21 December 2022 |

Exploring AI Inequality

By Ian Kar

Everyone in tech is excited about AI. That includes me—I’m using Lex to write this article, and I’ve been playing around with a ton of other AI tools for writing and content creation, including Bearly AI (mainly used for summarizing content, created by the homie Trung Phan) and Notion AI.

AI fever pitch is even seeping into fintech—much of Andreessen Horowitz’s “Big Ideas in Fintech” for 2023 revolves around AI: AI credit counseling, and finding a balance between companies’ use of Large Language Models and consumer trust in areas like credit underwriting. I’m a bit more apprehensive than a16z about AI and fintech, mainly because of regulation (if you recall, regulation is our forte at Vol. 1 Ventures). Based on chats with regulators and lobbying groups, I fully expect regulation of AI in financial use cases to be a hot topic in 2023 and beyond. There’s a ton of potential, but also a ton of room for misuse, particularly around fair lending laws (I’ll have more on that next year).

(I asked ChatGPT about its thoughts on regulation’s effect on AI and financial services…here’s what it said:

“Regulation can play a significant role in the intersection of artificial intelligence (AI) and financial services in several ways.

First, regulation can help ensure the ethical and responsible use of AI in financial services. For example, regulatory bodies might establish guidelines or best practices for the use of AI in decision-making processes, such as loan underwriting or fraud detection, to ensure that these processes are fair and unbiased.

Second, regulation can help protect consumers from potential risks associated with the use of AI in financial services. For example, regulatory bodies might require financial institutions to disclose how they use AI in their operations and to establish measures to mitigate any potential negative impacts on consumers.

Third, regulation can help to create a level playing field for all financial institutions, ensuring that smaller institutions are not disadvantaged by the adoption of AI by larger institutions. For example, regulatory bodies might require larger institutions to disclose their AI-based processes or provide smaller institutions with access to certain AI-based technologies.

Overall, regulation can play a crucial role in fostering the responsible and ethical use of AI in financial services and helping to protect consumers and ensure a fair and competitive marketplace.”)

There are also a lot of cultural and moral quandaries with AI. AI can clearly help knowledge workers, particularly those working in software. We’ve already seen a ton of uses for ChatGPT—from debugging and even writing code to creating sales emails and whole articles.

But for the vast majority of Americans who a) aren’t in tech and b) aren’t knowledge workers, I wonder if all these AI tools come off as creepy and weird. It’s exciting to us because it brings hope to the tech ecosystem—we’re on the cusp of a new wave of innovation that’ll solve a lot of problems and create a lot of new ones. All of those are opportunities for new companies and products to be built. So, of course it energizes the tech community.

But for average people, I don’t think AI will have the same effect—a lot of jobs are going to be taken away by AI, and a lot of folks already find things like ChatGPT weird and creepy.

There’s already a massive divide between tech people and non-tech people in the United States; AI unfortunately only exacerbates it. There are people who find technology exciting and cool, and people who find it lame and for nerds (frankly, that’s most people). And what makes this gap more complicated is the rapid wealth generation tech has enabled for insiders—it’s good for people in tech (which is still largely a meritocracy) but alienates and divides everyone else even more. The nerds getting far richer than the average American just isolates them further. This is why most of the mainstream press and culture is either subliminally or overtly anti-tech—they cater to most of America.

Just take crypto, for example—the crypto winter we’re in now has been celebrated by people who aren’t in crypto, in tech and otherwise. Congress’s lack of understanding of crypto is only a taste of what to expect with AI—at least with crypto, there are some analogies people can make to the traditional financial ecosystem. Try explaining an LLM (Large Language Model) to a Congressman (or even one of their aides).

I’m excited that we’re all excited about AI. Like I said, I think there will be a ton of use cases. But unless we work on solving this divide, the ethical and moral issues around AI may become increasingly daunting.