The Scholarly Kitchen

What’s Hot and Cooking In Scholarly Publishing

Ask The Chefs: The US Executive Order on Artificial Intelligence

  • By Roy Kaufman, Todd A Carpenter, Ann Michael
  • Dec 4, 2023
  • 1 Comment
  • Artificial Intelligence
  • Copyright
  • Education
  • Policy

On October 30, the Biden Administration issued an Executive Order on “Safe, Secure, and Trustworthy Artificial Intelligence.” According to the Administration, “[t]he Executive Order establishes new standards for Artificial Intelligence (AI) safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.” We asked the Scholarly Kitchen Chefs for their thoughts on the Executive Order.

Roy Kaufman

There has been significant governmental activity around AI, driven especially by the G7 Hiroshima process. In reading the Executive Order (EO), I was most interested in learning the Biden Administration’s approach to three topics: (1) copyright, (2) AI accountability, and (3) AI use in education.

The Executive Order kicked the can on copyright. The US Copyright Office (USCO, part of the Legislative Branch) is currently in the middle of a massive AI study process, and the Executive Order directs the head of the US Patent and Trademark Office (US PTO, part of the Executive Branch) to meet with the head of the USCO within six months of the Copyright Office’s issuance of any final report (traffic is bad in DC). At such time, the US PTO is directed to “issue recommendations to the President on potential executive actions relating to copyright and AI.” On the positive side, at least the EO acknowledged that copyright is relevant.

On accountability, as I noted previously in The Scholarly Kitchen, to reach its full potential AI needs to be trained on high-quality materials, and that training information needs to be tracked and disclosed. While the EO could have said more on this topic, I was pleased to note that it includes language such as a mandate to the Secretary of Health and Human Services to create a task force whose remit includes ensuring “development, maintenance, and availability of documentation to help users determine appropriate and safe uses of AI in local settings in the health and human services sector.”

Finally, on education, I was happy to see the following:

To help ensure the responsible development and deployment of AI in the education sector, the Secretary of Education shall, within 365 days of the date of this order, develop resources, policies, and guidance regarding AI. These resources shall address safe, responsible, and nondiscriminatory uses of AI in education, including the impact AI systems have on vulnerable and underserved communities, and shall be developed in consultation with stakeholders as appropriate. They shall also include the development of an “AI toolkit” for education leaders implementing recommendations from the Department of Education’s AI and the Future of Teaching and Learning report, including appropriate human review of AI decisions, designing AI systems to enhance trust and safety and align with privacy-related laws and regulations in the educational context, and developing education-specific guardrails.

Students are not “one size fits all.” Students in my local school district speak 151 home languages other than English. Within each language group, including native English speakers, we have children from some of the wealthiest zip codes in America as well as a student homelessness rate of greater than 10%. In districts such as mine, which is diverse by nearly every measure, including gender, race, religion, and national origin, personalized and adaptive educational tools are needed. CCC’s work with schools and ed tech providers licensing high-quality content for AI-based uses is promising, and we have seen how districts especially would benefit from more federal support. Let’s hope it is forthcoming.

Todd Carpenter

The White House Executive Order on AI was extremely wide-ranging, covering a broad swath of the AI-application landscape. This makes critique (in a short space) rather difficult. While I applaud the effort and focus, I am simultaneously troubled. I am certainly not an AI dystopian who believes that Artificial General Intelligence will soon be upon us and will by its nature decide the fate of humanity, to our eventual doom. Nor do I believe that AI-based tools mean people will not have any jobs in the future. Nor am I a Luddite who is averse to technology. During the “Ask the Chefs” panel at the Charleston Conference, I stressed the point that we are on the rapid upswing of the Gartner Hype Cycle, if not at its peak. This is the worst time to be making formative decisions, and most particularly regulation, about the governance, shape, and impact of a technology, since expectations are so out of step with reality. Two years ago, we could have wasted a lot (more!) of energy on blockchain. Linked Open Data didn’t solve all the cataloging and discovery problems. Peer-to-peer file sharing didn’t kill off the content industries. We need to balance the need for valuable guidance against the overly optimistic (or even harrowing) expectations of generalized AI.

However, there are real issues in the adoption and use of AI. Having a national strategy and coordinated approach across the U.S. Federal government is important. The plan outlines a number of reasonable and sensible approaches to the development of AI tools. Happily, from my perspective, necessary standards around AI security and safety occupy the prominent first place among the actions to be pursued. I am concerned, though, that the “red team safety tests” will be too limited, too vague, or too onerous to implement to have much of an impact.

One area of interest for our community is the focus on flagging the inclusion of AI-generated text in documents to ensure their authenticity. This is a modern revisit of a problem that has existed since the early days of digital publishing: Is this digital object authentic? In January of 2000, CLIR organized a meeting on this question and produced a report on it. It has been the focus of a variety of archival projects, such as InterPARES and iTrust, as well as the PROV work at W3C. One initiative currently seeking to address part of this issue, the Coalition for Content Provenance and Authenticity (C2PA), is advancing standards that signal whether an object has been digitally altered. Having a structure that communicates this information is valuable to an extent, but it is not a panacea. Just this week, Adobe came under fire for distributing stock photo images of the Israel-Hamas conflict that were generated by AI. Adobe defended itself by saying that the content was flagged as machine generated using C2PA protocols. Metadata is only a solution when people know that the metadata exists, what it means, and what to do with that information. Unfortunately, the public, as well as many content distributors, fail on all three of those counts. This highlights the fact that a metadata service is nearly useless unless someone populates the fields, uses them in practice, and then provides that information to users/readers.
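
That gap between metadata existing and metadata reaching a reader bears unpacking. The sketch below is a toy illustration in Python; the manifest fields and logic are invented for this post and are not the actual C2PA format, which uses cryptographically signed manifests embedded in media files.

```python
# Illustrative sketch only: a toy, C2PA-inspired provenance record and check.
# The field names and logic here are hypothetical, for illustration.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ProvenanceManifest:
    """A minimal stand-in for a content provenance manifest."""
    generator: str                  # tool or camera that produced the asset
    ai_generated: bool              # was the asset machine generated?
    edit_history: list = field(default_factory=list)  # prior modifications


def describe_provenance(manifest: Optional[ProvenanceManifest]) -> str:
    """Turn raw provenance metadata into a message a reader can act on."""
    if manifest is None:
        return "No provenance metadata found: origin unknown."
    if manifest.ai_generated:
        return f"Flagged as AI-generated by {manifest.generator}."
    if manifest.edit_history:
        return f"Edited {len(manifest.edit_history)} time(s) since capture."
    return f"No recorded alterations since creation by {manifest.generator}."


# The Adobe scenario: the flag was present in the file's metadata, but
# distributors and readers never surfaced or acted on it.
print(describe_provenance(ProvenanceManifest("gen-ai-tool", ai_generated=True)))
```

In the Adobe case, the first half of this pipeline worked: the flag was embedded. It was the second half, surfacing and interpreting the flag for distributors and readers, that failed.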

Earlier this fall, NISO hosted a forum focused on the role of our community in the AI discussions. Many of the ideas generated by that meeting centered on the questions of trust that are mirrored in the Executive Order. It makes sense for us to establish metrics by which we assess the quality of training data sets, or the validity of outputs. Each community should determine a set of criteria and signaling mechanisms (like CRediT) for the role that AI tools played in the creation of a piece of intellectual property, particularly in scholarly communications, where tracing intellectual provenance is critically important.
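
As a thought experiment, such a declaration might look something like the following sketch. Every field name and role term below is invented for illustration; no standard AI-contribution vocabulary exists yet.

```python
# A hedged sketch of a CRediT-style declaration of AI involvement.
# The structure and role terms are hypothetical, not an existing standard.
ai_contribution_statement = {
    "work": "10.1234/example.doi",  # hypothetical identifier
    "ai_tools": [
        {
            "tool_class": "large-language-model",  # generic, not a product claim
            "roles": ["copyediting", "literature-summary"],  # invented role terms
            "human_review": True,  # outputs were checked by an author
        }
    ],
}

# A machine-readable record like this could let publishers and indexers trace
# intellectual provenance the way CRediT traces author contributions.
print(ai_contribution_statement)
```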

Also of concern for our community is the lack of attention to the legal and copyright issues surrounding the use of these systems. The legal status of content created using these tools is ambiguous at best, and guidance on the development of those legal frameworks would have been valuable. Similarly, the status of the reuse of content ingested to train these systems would have been a useful area for focused study and guidance.

Despite the wide scope and coverage of the announcement, notably missing from the list of agencies mentioned, either as applying AI or as requiring guidance, are any that engage with national security. There is no mention of the use of AI by the National Security Agency (NSA), which ran many of the illegal data collection programs made public by Edward Snowden and continues a massive data collection effort. As the NSA publicly acknowledges (candor it is not known for), massive data collection continues at its Utah facility, with storage measured in yottabytes and processing power nearing, if not exceeding, an exaflop (one quintillion floating-point operations per second). Without a doubt, this facility is applying AI tools to the analysis and decryption of communications, covering not only metadata but the content of exchanges. It isn’t a stretch to presume that these same tools and techniques will be applied against US citizens by other agencies, if only at a more “modest” scale. The public NSA site also proudly proclaims that it is seeking to crack, in the coming decade, the 256-bit encryption that underpins all secure communication on the web, which would mean that even the minimal privacy protections our online exchanges enjoy through security (sadly, NOT constitutional rights) will disappear. Similarly, the Defense Department is not mentioned in the AI guidance, even though much of the development of robotics has been driven by the DARPA research and funding that created the fields of autonomous vehicles and AI-guided machines. If one of the main concerns about AI is autonomous lethal decision-making by machines without independent governance, then proclaiming an AI safety initiative that doesn’t even mention the national security agencies leaves a gaping hole that one could fly an AI-guided Reaper drone through.

Ann Michael

There’s so much in this Executive Order (EO) that it’s hard to figure out where to start. My overall impression is that it is comprehensive. It seems to have been written by people who generally understand what’s happening in AI right now.

The EO includes cautions, acknowledgements, explorations, incentives, and actions. It covers how the US can grow capabilities in government, in commerce, in small business, and as individuals (with programs proposed to support American workers). What I don’t see is hype and fear mongering.

Most of the actions are bound by timeframes of 270 days or fewer. That does not seem typical for government action; it feels fast, like an attempt to jump in and get moving.

The full EO is 56 pages, but the Fact Sheet is only 6 pages and it’s worth a read. It’s a good summary of the areas covered. There are way too many specifics to explore in this post, so I’ll focus on what this might mean for existing models.

Most of the regulations put forward are for dual-use foundation models, defined in Sec. 3(k)(i-iii) as:

“…an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters…” [emphasis added]

Tens of billions of parameters is a lot! Only the largest models, with the names you know, are that large…at least for now.

  • Llama 2 has a few different options, the largest of which has 70 billion parameters.
  • GPT-3 has 175 billion parameters and GPT-4 is rumored to have 1.7 trillion parameters.
  • Anthropic’s Claude has been estimated at anywhere from 130 billion to 175 billion parameters.
  • Google Bard comes in around 137 billion parameters.

Except for Meta’s models (Llama), the models listed above are closed (versus open source). As closed models, the organizations that own them are likely in a better position to comply with regulations. They have dedicated staff, investors, and business models (or will likely have them one day 😊). For open source uses, compliance may be more difficult.
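
To put the parameter counts above in perspective, a little back-of-the-envelope arithmetic shows the scale involved. This is a rough sketch assuming each weight is stored as a 2-byte (16-bit) float, a common but not universal choice, and the counts are the estimates quoted above, not official figures.

```python
# Rough memory footprint implied by the parameter counts quoted above,
# assuming 2 bytes (16 bits) per weight. Estimates, not specifications.
models = {
    "Llama 2 (largest)": 70e9,
    "GPT-3": 175e9,
    "GPT-4 (rumored)": 1.7e12,
}

for name, params in models.items():
    gigabytes = params * 2 / 1e9  # 2 bytes per parameter
    print(f"{name}: {params / 1e9:,.0f}B parameters -> ~{gigabytes:,.0f} GB of weights")
```

By this rough measure, the largest Llama 2 model alone needs on the order of 140 GB just to hold its weights, before any serving infrastructure, which is part of why only well-resourced organizations operate at this scale.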

What are these large models being required to do?

Sec. 4.2(a)(i), within 90 days of the date of the EO, requires “Companies developing or demonstrating an intent to develop potential dual-use foundation models to provide the Federal Government, on an ongoing basis, with information, reports, or records regarding the following:”

  • Notify the federal government when they are training their model
  • Conduct and share results of red-team safety tests
  • Demonstrate physical and cybersecurity measures taken to protect model weights

Weights are critical to fine tuning a model. Very basically, they govern the influence that inputs have on outputs. They are of concern in the EO because, when/if manipulated, they can be leveraged to circumvent safety features within a model. However, sharing weights and learning from them greatly increases the efficiency and productivity of experimentation. This is particularly true for open source models.
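
For readers who want the intuition, here is a deliberately tiny sketch of that idea. The numbers are invented and wildly simplified (real models have billions of weights, not three), but the mechanism is the one at issue.

```python
# Toy illustration of what weights do: they set each input's influence on
# the output, so changing them changes behavior without changing inputs.
import numpy as np

x = np.array([1.0, 0.5, -0.3])  # three input features
w = np.array([0.8, 0.1, 0.0])   # learned weights: the first feature dominates

print(np.dot(w, x))             # 0.85 -- output driven mostly by feature 1

# Fine-tuning is, at bottom, adjusting these numbers. The same access that
# lets researchers adapt a model lets a bad actor shift weights to weaken a
# safety behavior, which is why the EO treats model weights as sensitive.
w_tampered = w + np.array([0.0, 0.0, -2.0])  # exaggerated, illustrative edit
print(np.dot(w_tampered, x))    # 1.45 -- same inputs, different behavior
```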

The requirements for model owners have given rise to debates. Do the requirements entrench incumbents? Do they limit innovation? Will they hamstring open source?

Some note that these regulations may hamper U.S. research while doing nothing to slow down China, or anyone else. Will limiting access to weights simply result in putting fewer weights in the hands of the U.S. research community?

Martin Casado, a General Partner at Andreessen Horowitz, posted a letter that he and some prominent names in AI, research, and finance sent to President Biden.

“…the EO overlooks our primary concern: ensuring AI remains open and competitive, rather than monopolized by a few entities. Moving forward, it is critical that the U.S. remain committed to anti-monopoly ideals in how it formulates AI-related policies.”

There have also been references to “regulatory capture,” with many observers noting the friendliness between the government and the AI giants. Sam Altman, CEO of OpenAI, has spent many hours meeting with government officials in the U.S. and abroad. A New York Times article, “How Sam Altman Stormed Washington to Set the A.I. Agenda,” notes:

“And instead of protesting regulations, he has invited lawmakers to impose sweeping rules to hold the technology to account.”

In a more recent take reported by The Information (paywall), executives from Hugging Face argued in an interview that tying reporting requirements to compute usage may not be effective: “given how powerful AI chips are becoming, those reporting requirements could soon apply to many more companies than originally thought.”

“It may be more helpful to require startups to document and disclose information about their models, like their limitations, biases and intended use cases, regardless of their size.”
Irene Solaiman, Head of Global Policy at Hugging Face, as reported by The Information.

What is the bottom line? I think the fundamental question is: Do you believe that AI needs regulation? If the answer is yes, then the harder question is how do we do that? What are our core principles and their priorities? Whose interests are paramount?

When dealing with large, sweeping issues it seems almost impossible to get things 100% right, let alone on a first attempt. I believe the innovators and the open source advocates have some very valid points. However, any regulation will, by definition, add restrictions.

It will be interesting to see if the EO is revised at all based on feedback. I’d also like to see some provisions to monitor the progress and impact of the EO itself, with a commitment to adjust it (fine-tune it, if you will) as we learn more and gather more evidence. Perhaps iteration and adjustment are lessons the Federal Government can learn from software development.

Roy Kaufman

Roy Kaufman is Managing Director of both Business Development and Government Relations for the Copyright Clearance Center (CCC). Prior to CCC, Kaufman served as Legal Director, John Wiley and Sons, Inc. He is a member of, among other things, the Bar of the State of New York, the Author’s Guild, and the editorial board of UKSG Insights. Kaufman also advises the US Government on international trade matters through membership in International Trade Advisory Committee (ITAC) 13 – Intellectual Property and the Library of Congress’s Copyright Public Modernization Committee in addition to serving on the Board of the United States Intellectual Property Alliance (USIPA).

Todd A Carpenter

@TAC_NISO

Todd Carpenter is Executive Director of the National Information Standards Organization (NISO). He additionally serves in a number of leadership roles across a variety of organizations, including as Chair of the ISO Technical Subcommittee on Identification & Description (ISO TC46/SC9), founding partner of the Coalition for Seamless Access, Past President of FORCE11, Treasurer of the Book Industry Study Group (BISG), and a Director of the Foundation of the Baltimore County Public Library. He also previously served as Treasurer of SSP.

Ann Michael

@annmichael

Ann Michael is Chief Transformation Officer at AIP Publishing, leading the Data & Analytics, Product Innovation, Strategic Alignment Office, and Product Development and Operations teams. She also serves as Board Chair of Delta Think, a consultancy focused on strategy and innovation in scholarly communications. Throughout her career she has gained broad exposure to society and commercial scholarly publishers, librarians and library consortia, funders, and researchers. As an ardent believer in data informed decision-making, Ann was instrumental in the 2017 launch of the Delta Think Open Access Data & Analytics Tool, which tracks and assesses the impact of open access uptake and policies on the scholarly communications ecosystem. Additionally, Ann has served as Chief Digital Officer at PLOS, charged with driving execution and operations as well as their overall digital and supporting data strategy.


Discussion

1 Thought on "Ask The Chefs: The US Executive Order on Artificial Intelligence"

Great article and excellent arguments. I think we need to prepare for a future wherein:
1. We will be unable to tell whether a human or an AI produced a digital document.
2. We will be unable to easily and cost-effectively assess whether the scientific basis for certain findings a) has been produced by AI or not and b) actually makes sense https://www.nature.com/articles/d41586-023-03635-w
3. Researchers will also have used AI for decision-making processes, with the AI having ‘nudged’ them toward conclusions that are (more) aligned with the AI’s guardrails. And we won’t be able to detect that this has happened. https://www.hec.edu/en/knowledge/instants/nudges-and-artificial-intelligence-better-or-worse

  • By Emanuel Raymond
  • Dec 4, 2023, 7:52 AM
