This is how the deep state controls society. Companies willing to shape their output to please the state apparatus get special resources that let them pull ahead of the general competition. A company running an uncensored LLM would be unable to participate, and so unable to draw on those resources to compete with Google and Microsoft.
“Question: Looking over the past year, how do DARPA and the DARPA programs that pop up stay relevant given the fast pace of advancements in AI?
Answer: One area is program structure. The AI Cyber Challenge (AIxCC) is a competition where we partner with large language model (LLM) companies (Anthropic, Google, Microsoft, and OpenAI) to provide compute access to those in the competition. As the capability advances, the performers using it can leverage the advanced capability at the same time. That is one model.

Another piece is that we keep an eye on what is happening: if the capability we are working on in a program becomes outmatched, we will stop the program and regenerate or do something else. Another thing is that not all the frontiers are advancing at the same pace. Reinforcement learning is not moving as fast as the transformer model, and the pace of the frontier models is slowing down a little bit. A lot of the results we are seeing right now involve understanding what these models are doing and what they are not doing. They haven’t released GPT-5; they haven’t really even started training GPT-5, because of the slowdown in the release of H100s caused by production problems at the Taiwan Semiconductor Manufacturing Company (TSMC). So we have a little bit of breathing space.

As for the Gemini model and getting the planning piece integrated into the LLM, we are not sure; we lack full transparency, but there are large research problems that still need to be solved. Hearing people say we’re “just a little bit away from full artificial general intelligence (AGI)” is a bit more optimistic than reality. There are things like the halting problem. We still have exponential problems, and we still need resources. I think there are still going to be super hard problems that are not going to be fixed by scaling.
Follow up Question: You might not have AGI, but you might have a system that helps humans, everyone in this room, advance so quickly that before AGI arrives we are dealing not with an apex but with constant asymptotic growth. How do you plan for that?
Follow up Answer: We try very hard not to get in the way of what industry is going to do. We are trying to solve problems that industry isn’t going to work on tomorrow. We aren’t planning to work on multimodal large language models, because industry will get to that in time. We are not trying to work on incorporating new information into an LLM, because they are going to do that as soon as they can. We are trying to work on things they won’t work on right away. We haven’t done this yet, but we might do multi-level security, because we think that is something the DoD might care about more than industry would. Maybe that is on industry’s roadmap, but on a more distant time frame. I don’t know what the right answers are, but the question of “What are they going to do, in what time frame, and what should we do?” is something we talk about all the time. Do we have perfect answers? No. But do we ask that question constantly? Yes.”
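For anyone unfamiliar with the halting-problem remark above, here is a minimal sketch of Turing's classic diagonalization argument (the names `halts` and `diagonal` are illustrative, not from the talk). It shows why this class of problem cannot be fixed by scaling: no amount of compute produces a total `halts` function, only heuristics that fail on some inputs.

```python
# Sketch of the halting-problem diagonalization argument.
# `halts` is a hypothetical oracle; Turing (1936) showed no such
# total, always-correct function can exist.

def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) halts."""
    raise NotImplementedError("No such total function can exist.")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts for
    # program run on its own source.
    if halts(program, program):
        while True:     # oracle said "halts" -> loop forever
            pass
    return              # oracle said "loops" -> halt immediately

# The contradiction: consider diagonal(diagonal).
# - If halts(diagonal, diagonal) is True, diagonal(diagonal) loops forever.
# - If it is False, diagonal(diagonal) halts immediately.
# Either way the oracle is wrong, so no correct `halts` can exist,
# regardless of how much compute or training data you throw at it.
```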
