Executive Order on AI, but Impact Remains Uncertain
Nov 5, 2023
This article was written before the EU AI Act, which was under discussion soon after. President Biden has issued a landmark executive order on artificial intelligence (AI), aimed at ensuring that America leads the way in developing and using this powerful technology responsibly. The order establishes new standards for AI safety and security, protects Americans' privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, and advances American leadership around the world.
Read the original executive order here
The order's requirements apply only to the handful of companies that develop very large language models (LLMs), such as OpenAI, Google, Anthropic, Meta, and Microsoft. Smaller LLMs, which are more common in the open-source AI ecosystem, are exempt from the requirements of the executive order.
This means that open-source AI developers can continue to operate without regulation. It is expected that within the next year or two, 70B LLMs, or ensembles of them, will be able to match or even beat the performance of GPT-4 on specific tasks. This is because LLM training is becoming increasingly efficient, and decent LLMs can now be trained on a budget of less than $10 million.
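To make the budget claim concrete, here is a rough back-of-envelope estimate of what a 70B-parameter training run might cost. The parameter count, token count, hardware throughput, utilization, and GPU pricing below are all my own assumptions for illustration, not figures from the executive order or from any particular lab.

```python
# Rough training-cost estimate for a 70B-parameter model.
# All numbers below are illustrative assumptions, not official figures.

params = 70e9            # 70B parameters
tokens = 2e12            # ~2T training tokens (assumed, Llama-2-70B scale)
train_flops = 6 * params * tokens        # standard 6*N*D approximation, ~8.4e23 FLOPs

gpu_peak_flops = 312e12  # assumed A100 BF16 peak throughput, FLOP/s
mfu = 0.40               # assumed model FLOPs utilization
gpu_hours = train_flops / (gpu_peak_flops * mfu) / 3600

usd_per_gpu_hour = 2.0   # assumed cloud GPU price

print(f"Training compute : {train_flops:.2e} FLOPs")
print(f"GPU-hours        : {gpu_hours:,.0f}")
print(f"Estimated cost   : ${gpu_hours * usd_per_gpu_hour:,.0f}")  # roughly $3.7M under these assumptions
```

Under these assumptions the compute bill lands around $4 million, comfortably inside the sub-$10 million budget mentioned above; real projects also pay for data, experiments, failed runs, and staff.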
This creates an incentive for companies to innovate and match frontier performance while staying below the compute thresholds set by the executive order. However, it is unclear why larger LLMs are considered a greater danger to national security than smaller LLMs, which could be purpose-built to generate misinformation.
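For a sense of scale, the reporting threshold announced with the order is widely cited as 10^26 operations of training compute. The sketch below compares a few hypothetical training runs against that figure using the same 6*N*D approximation as above; the threshold value and the example model sizes are assumptions I have added for illustration, so check the order's text (Section 4.2) for the authoritative numbers.

```python
# Sketch: hypothetical training runs vs. the order's widely cited reporting threshold.
# The threshold value and the example runs are assumptions for illustration only.

REPORTING_THRESHOLD = 1e26   # operations, as widely reported alongside the order

def training_flops(params: float, tokens: float) -> float:
    """Standard 6*N*D approximation for dense-transformer training compute."""
    return 6 * params * tokens

hypothetical_runs = {
    "70B params, 2T tokens":   training_flops(70e9, 2e12),
    "70B params, 10T tokens":  training_flops(70e9, 10e12),
    "400B params, 15T tokens": training_flops(400e9, 15e12),
}

for name, flops in hypothetical_runs.items():
    status = "reportable" if flops > REPORTING_THRESHOLD else "below threshold"
    print(f"{name}: {flops:.1e} FLOPs "
          f"({flops / REPORTING_THRESHOLD:.1%} of threshold) -> {status}")
```

Even the largest of these hypothetical runs sits well under the threshold, which is why 70B-class open models, and quite a bit beyond, fall outside the order's reporting regime as currently drafted.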
One possible explanation is that the architects of the executive order believe that larger LLMs will be more powerful than smaller LLMs and are trying to slow down the companies that have the resources to develop them. In a way, this could be seen as helping the open-source AI ecosystem by not requiring smaller LLMs to comply with the regulations.
However, it is also possible that the government will reduce the thresholds in the future, making it harder for open-source AI developers to operate. Overall, the impact of the Biden AI Executive Order on open-source AI is uncertain, and it will be important to monitor how the regulations are implemented.
Here is a summary of the key points:
- Biden issues executive order on AI, establishing new standards for safety, security, privacy, equity, and more.
- Order only applies to a handful of companies that develop very large language models (LLMs).
- Smaller LLMs, more common in open-source AI ecosystems, are exempt from requirements.
- 70B LLMs expected to match or beat GPT-4 performance on specific tasks in the next year or two.
- Companies pressured to innovate and match performance below thresholds set by order.
- Unclear why larger LLMs are considered a greater danger to national security than smaller LLMs.
- Possible explanation: the order's architects believe larger LLMs are more powerful and want to slow down the companies that can develop them.
- This could help the open-source AI ecosystem by not requiring smaller LLMs to comply with regulations.
- Government could reduce thresholds in the future, making it harder for open-source AI developers to operate.
Overall, the impact of the Biden AI Executive Order on open-source AI remains uncertain.