The recently released “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” is a sound statement in favor of common sense and good judgment in applying AI to a particularly dangerous military use case. Certainly, no one would argue the opposite, that military AI should be irresponsible, biased, or unmanaged. But the declaration lacks the substance that would let it serve as a source document to coalesce international efforts on responsible AI. It does not galvanize us to action. It does not suggest a research agenda or a development roadmap.

We need to go beyond platitudes and discuss real functional requirements (as opposed to real technical solutions) for safe and responsible military AI. These requirements include explainability, data provenance, transparency, confidence measures, and model life cycle management. Without a robust discussion of what it would take to build AI we can trust, we are simply shaking our fists at the Killer AI Hobgoblin. We should instead calmly examine the real causes of the real risks of AI and work toward using innovations from all segments of our economy to address them.
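To make the argument concrete, several of these functional requirements can be expressed as machine-checkable metadata and gates rather than aspirations. The sketch below is purely illustrative: every class, field, and threshold is a hypothetical assumption invented for this example, not a description of any real declaration, standard, or system.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ProvenanceRecord:
    """Data provenance: where training data came from and who vouched for it."""
    dataset_id: str
    source: str
    collected: date
    reviewed_by: str

@dataclass
class ModelCard:
    """Model life cycle management: metadata that travels with a deployed model."""
    model_id: str
    version: str
    trained_on: list  # list of ProvenanceRecord entries
    retired: bool = False

def actionable(card: ModelCard, confidence: float, threshold: float = 0.9) -> bool:
    """Confidence measure as a gate: act on a model output only when the
    model is current, has documented provenance, and the prediction
    confidence clears an auditable threshold."""
    if card.retired or not card.trained_on:
        return False  # stale model or no provenance: defer to a human
    return confidence >= threshold
```

A requirement phrased this way can be tested, audited, and negotiated internationally; a platitude cannot. The specific threshold of 0.9 is arbitrary here, which is exactly the kind of parameter a substantive agreement would need to debate.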