If these principles don't sound like some bullshit-ass liberty brief that nobody is going to listen to, then I don't know what the fuck they sound like. Who can actually give me a practical example that will work across the board for killer AI, one that showcases appropriate levels of judgment and care? The DOD, in this case, is Vaguebooking these Ethical Principles for AI, and we're not entirely comfortable with that:

"Responsible. DOD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities."

"Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities."

"Traceable. The Department's AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedures and documentation."

"Reliable. The Department's AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles."

"Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior."