Last week I attended the Fortune Brainstorm AI conference in San Francisco. When I entered the venue, I asked the hotel concierge, “Can you tell me where the AI conference is being held?” He smiled and said, “Which one?”
That sums up how many of us are feeling right now. Talk of AI is everywhere. It’s easy to get lost in extreme thoughts and near religious experiences when it comes to what AI could do and what we should do about it. If you wade in too deep, it’s easy to get pulled under by the current.
Conference themes may be planned, but to some extent they are also in the eye of the beholder. Thinking about last week and looking through my notes, here’s what made an impact on me.
AI – not just for data scientists and engineers anymore
“AI is not only Generative AI.” Rudina Seseri, Founder and Managing Partner of Glasswing Ventures
Rudina made a meaningful point. Somewhere along the way we have jumbled up all of AI into one non-differentiated basket. Using general sweeping terms is muddying the water and creating confusion and stress.
Ms. Seseri was part of a session on AI and Logistics together with Jon Chorley, Chief Sustainability Officer at Oracle, and Rebecca Yeung, Vice President, Operations Science and Advanced Technology at FedEx. All you need to do is look at FedEx, any advanced logistics organization, or a large credit card company, and it becomes clear that AI has been around for a long time! Predictive analytics has been fueled by AI for years, and robotics keeps getting more sophisticated. Rebecca Yeung shared anecdotes of robots playing “three-dimensional Tetris” every day as they load trucks.
Yes, that is amazing – but, what’s new? What has changed in the last year?
What has really changed is not only Large Language Models (LLMs) powering Generative AI but public awareness and – now – interaction with these models. According to Rudina Seseri, the difference with the introduction of Generative AI is mass consumerization.
And judging by his comments, Reid Hoffman (LinkedIn cofounder, former OpenAI board member, Greylock partner, and Inflection AI cofounder) would agree. “GPT-3.5 was around for a while, but the chat interface made it tangible.”
Proceed with caution – but proceed!
One of the major themes was not to get left behind. Reid took the risks seriously, but also warned that we should not “fumble the future” by overthinking AI risks.
Another speaker I found incredibly insightful was Prem Natarajan, Executive Vice President, Chief Scientist, and Head of Enterprise AI at Capital One. He was balanced, calm, and wise. He urged us to approach AI (in all its forms) with “enthusiasm going hand in hand with thoughtfulness.”
Prem was part of a session on AI’s Impact on Fintech. He demonstrated his balanced perspective, in my opinion, when he discussed how “predictive AI makes mistakes too, but since it generates scores, we didn’t think of them as hallucinations.”
I had never really thought of it this way. As we got into predictive analytics, our models told us how seriously to take their predictions. That is missing from Generative AI. I wonder if there is a way to recreate that – can Gen AI be taught to give us some kind of confidence interval?
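There are hints of an answer already: many LLM APIs can return per-token log-probabilities, and one rough proxy for a confidence score is the geometric mean of the per-token probabilities. Here is a minimal sketch of that idea, assuming you already have the log-probs in hand (the function name and the numbers are illustrative, not any particular vendor's API):

```python
import math

def sequence_confidence(token_logprobs):
    """Crude confidence proxy for a generated sequence: the geometric
    mean of the per-token probabilities, computed from log-probs.
    This is an illustrative heuristic, not a calibrated probability."""
    if not token_logprobs:
        raise ValueError("empty sequence")
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

# A confidently generated answer (probability mass concentrated
# on the chosen tokens)...
high = sequence_confidence([-0.05, -0.10, -0.02])

# ...versus a hesitant one (mass spread over many alternatives).
low = sequence_confidence([-1.2, -2.5, -0.9])
```

A score like this is not the well-understood confidence interval of classical predictive modeling, but it is one small step toward models that tell us how seriously to take them.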
This brings us to the next theme…
Can AI be used to mitigate the risks it creates?
One of the most interesting sessions was an interview with Sal Khan, founder of the free online learning nonprofit Khan Academy. In discussing the introduction of Khan Academy’s AI-powered tutor, Khanmigo, he talked about the development process and some of the concerns that his team had. The biggest one was cheating. Would introducing AI enable students to cheat?
To address this concern, Khan Academy built safeguards into its AI tutor, reimagining the role of AI in the learning process. The AI tutor is the coach. It not only helps guide the student, but also reports back to the teacher. It gives the teacher insight on where and how the student struggled and what prompts helped the student break through the blocks they experienced.
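The shape of that design is worth dwelling on. Here is a toy sketch of the pattern, a tutor that offers guiding prompts instead of final answers and keeps a log the teacher can review. Every name here is hypothetical; this is my illustration of the coaching pattern, not Khanmigo’s actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class TutorSession:
    """Toy model of a 'coach, not answer machine' tutor (hypothetical;
    not Khanmigo's real design). It responds with guiding prompts and
    records every exchange for later teacher review."""
    student: str
    log: list = field(default_factory=list)

    def ask(self, question: str) -> str:
        # Record the exchange so the teacher can see where help was needed.
        self.log.append(question)
        # Guide rather than solve: return a Socratic prompt, not the answer.
        return f"Let's work through it: what do you already know about '{question}'?"

    def teacher_report(self) -> dict:
        # Summarize the session for the teacher.
        return {
            "student": self.student,
            "questions_asked": len(self.log),
            "topics": list(self.log),
        }

session = TutorSession("Ada")
session.ask("How do I factor x^2 - 9?")
report = session.teacher_report()
```

The interesting move is that the cheating fear becomes a feature: the same interaction log that could have hidden a shortcut instead surfaces the student’s struggle to the teacher.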
The team was challenged to “turn their fears into features.” The idea of using AI to address AI concerns also came up in relation to cybersecurity and fraud detection.
Could AI be used in research to detect fraud and other research integrity issues?
What are other threats that either pre-existed AI or can reach new levels of harm with AI enablement? What could we do to combat that?
How does this impact our workforce?
That brings us to the next common thread that ran throughout the conference. More than one speaker said some version of “people won’t lose their jobs to AI; they’ll lose their jobs to people who are using AI.”
My intuition is that both will happen. Some jobs will be lost to AI; that seems inevitable. But I do agree that many more jobs will be in jeopardy if we can’t prepare our workforce by figuring out how to use AI to enhance productivity, creativity, and discovery.
Paul Daugherty, Chief Technology and Innovation Officer at Accenture, said his “biggest worry is people that aren’t using generative AI” during his session: Using AI to Reimagine the World of Work. He talked about the need to “Reimagine work and how it happens. Don’t just add tech.” Consider where you need people and how you get them ready to be the human in the loop. He strongly advocated for investing more in people than technology.
“Reimagine work and how it happens. Don’t just add tech.” Paul Daugherty, Chief Technology and Innovation Officer at Accenture
Although much of what he said could be labeled consultant-speak, he made some important points. He advocated for taking a value-add approach versus a use case approach. My interpretation of that was to make sure you take a deeper perspective on your customer’s needs and your organization’s objectives. Don’t just do what you do faster and cheaper – consider whether you’re doing the right things.
He also talked about having an AI backbone (he said a “Gen AI backbone,” but I worry that even that might be shortsighted and too narrow). However, his point is well taken. It’s never been more important to have clean, consumable, complete data. Data is the fuel for all AI methods. It’s past time to get our data in order!
Jim Cathey, Chief Commercial Officer at Qualcomm, echoed this sentiment: we need to be “AI first. It’s first, second, and third. It needs to be pervasive in the organization.”
The point that Paul made that resonated with me the most was this: “There is no AI-ready workforce you can hire – you need to bring them with you.”
Fun with Google
Sissie Hsiao, VP and GM of Google Assistant and Bard, was the first speaker of the conference. As you can imagine, there was a lot of chatter in the audience about Google’s release of Gemini and the authenticity of the demo.
Someone asked Sissie about the business model when Bard, Gemini, or whatever they package it as, replaces search. She said that they were separate use cases, and that targeted investigation is different than interaction with an AI.
From where I sit, what that demo video shows, albeit as an aspiration, is a natural and seamless interaction with a vast knowledge source. Why would anyone ever do a blunt, clunky search when they can just naturally interact with an AI?
So, I made that point and asked the question again: “What are Google’s plans to replace the ad revenue that is currently generated by search?” My interest was not only in gaining insight into how Google might monetize us, but also in getting ideas for how we can monetize AI toward value and sustainability.
And finally, OSTP
I also really enjoyed the virtual session with Arati Prabhakar, Director of the White House’s Office of Science and Technology Policy.
What I found most interesting was her statement that “American leadership in the world requires American leadership in AI.” This is happening just as other parts of the world are developing models and coming up with regulations and provisions.
Will AI be multi-cultural? That one question sent my mind spinning into possibilities. How do AIs developed in different countries reflect their cultures? Could we learn about other cultures not only by being there and interacting with people, but also through their AIs? Would that open multi-cultural awareness to more people in the world?
I keep going back to what Prem said about approaching AI with “enthusiasm going hand in hand with thoughtfulness.” Yes, the risks are real: the risk of running ahead blindly without thinking about what we’re doing, and the risk of shutting down because we don’t know the future and are worried about what it might hold.