
Six Themes On Artificial Intelligence To Consider


One of the big trends in markets throughout 2023 has been artificial intelligence, or AI.

I tend to avoid jumping into crowded themes (sometimes to my benefit, sometimes to my detriment), so AI is not something I have written much about publicly. I prefer to absorb new trends and think about them for a while, rather than come out with a hot take right away.

I did cover the topic in a May Stock Waves report and then again partially in a July Stock Waves report. It has been a topic I've been following closely despite not having written about it heavily yet, and this issue will draw from those prior reports and more recent information to cover some themes that I think are worth considering but that I don't see getting much coverage.

AI Themes

The takes on AI have ranged from “this is all hype and there’s nothing here” to “it’s going to kill us all in five years and take over the world”.

Unsurprisingly, I fall somewhere between those two extremes, where I think there are clearly major breakthroughs here but any emergent technology will be prone to hype cycles and will get ahead of itself from time to time.

Anyone who has taken the time to use some of the latest tools can see the value here, which didn’t exist a few years ago. As the technology matures and becomes ubiquitous, of course it’ll have a significant economic impact.

However, any exponential technology is hard to map out over a long timeline. A useful exercise, though, is to figure out what the right questions are, even if we don't necessarily know all of the answers to those questions. There's nothing worse than not knowing what questions to ask, or what the potential bounds or bottlenecks are, in what is otherwise a highly complex topic. We must strive to have at least a basic map of the terrain, in other words, even if there are a lot of uncharted areas.

I’m not much of a programmer (the last time I wrote any code at all was probably around 2014), so any views I have on it are more from the engineering hardware side or from an economics angle, rather than talking about anything software specific. And what I’m mainly going to be trying to do is to figure out some of the right questions to think about, and to start mapping out some of the terrain to see where the bottlenecks or inflection points might be.

Theme 1) Technology Through Step-Change

When we think of technological development, we often imagine a smooth exponential line upwards.

In reality, technological development is mostly a series of overlapping step-changes. Certain technological breakthroughs occur, cause a multi-decade technological boom, and then run into a hard ceiling and stagnate, until the next breakthrough emerges. That’s why there are often science fiction stories set at a certain date and by the time we actually reach that date, our world isn’t as advanced as the writer expected (e.g. the 1968 book Do Androids Dream of Electric Sheep? was set in 1992, and its 1982 film adaptation Blade Runner was set in 2019).

The reason this is relevant is because without the emergence of a new breakthrough, technology within a given industry often underperforms our expectations (if we assume a continuous growth model). In contrast, when there is a new breakthrough, technology within a given industry can easily outperform our expectations and change things more quickly than we expect.

Often, the discovery and application of a more concentrated energy source serves as a big step-change. The invention of the steam engine and the associated mining of coal ushered in the industrial revolution, which then was compounded by the invention of the internal combustion engine and the associated production of oil. This gave humanity enough energy density for high-speed travel, broadly available running water, and the mechanization of agriculture. All of these things freed up most people to leave subsistence farming and to work on other things, and to have more sanitary living conditions, which led to a boom in industrial productivity and lifespans.

In other words, a large portion of what we attribute to technology is encapsulated in this chart:

Our World in Data

In the medical industry, the discovery of germs and the associated invention of antibacterial medicine, the invention of anesthesia and various surgical techniques, and the invention of vaccines against diseases like polio massively increased survivability into old age during the early 20th century. Eventually our capabilities stagnated somewhat, as we haven't really increased the maximum human lifespan but have increased the percentage of people that make it into their 70s and older. One day, perhaps gene editing and various age-reversing technologies will create another big step-change toward much longer maximum lifespans.

Humans spent thousands of years wanting to fly without much progress at all. In the 1700s, inventors made the hot air balloon, followed by hydrogen/helium airships, and then stagnated for a while at that level. But then, the combination of hydrocarbon production and aluminum production (both of which began at commercial scale in the 1800s) finally allowed humans to create a true flying machine in the form of the airplane at the start of the 1900s. Due to the technological boom that ensued, just decades later we had safe commercial transcontinental flights and human space exploration. But then the technology ran into a practical ceiling and development slowed down, with an inability to maintain affordable supersonic commercial flight (e.g. the discontinuation of the Concorde) and an inability to maintain the prior pace of human space exploration. The only period in which humans traveled beyond low Earth orbit was a brief window between 1968 and 1972, and now, over five decades later, no human has traveled beyond it since. And consider for a moment that a civilian had the option to board a supersonic aircraft in the 1980s but has no option to do so today in the 2020s.

The discovery of electricity and the application of it, along with telecommunication systems, was another massive source of innovation which further automated our surroundings and connected the whole world together. This also had a big impact on money: the telegraph enabled transactions to move around the world at the speed of light, which eventually played a role in changing the type of money we use all around the world. The invention of semiconductors then compounded this, giving us science-fiction level abilities to communicate with each other peer-to-peer through air, using supercomputers that we keep in our pockets. Rather than rely on buttons, nearly the entire surface of each of these pocket supercomputers serves as a controllable glass window into a globally-connected digital realm. Imagine describing this technology to your great-grandparents.

Artificial intelligence has been with us since the mid-20th century. For over two decades, chess programs have been able to beat the best human chess players. At first it required a large supercomputer to beat top human grandmasters, but as technology improved, it could be done by a normal desktop computer, and now can be done by a mobile phone. The field of machine learning has continued to advance, with pockets of commercial relevance but nothing groundbreaking until recently.

Now, AI seems to have finally hit broad commercial relevance, conceptually equivalent to when engineers put hydrocarbons and aluminum together and opened up all sorts of new possibilities for flight in a stepwise fashion. Specifically, computers and software have reached a tipping point where we can apply AI to all sorts of different software tasks, and it can learn and adjust quickly to substantially impact how we work. It’s a step-change, not a gradual year-by-year compounding.

Theme 2) 10x Accessibility

The knee-jerk reaction to AI by investors is to invest in things that are needed to make AI, such as the AI software and the chips that they use. That trade is already rather crowded at this point.

Another theme to think about is to ask, “what products will become 10x more accessible thanks to AI?”

Let’s use Adobe (ADBE) as an example. Photoshop is a semi-advanced program, in that it’s not exactly rocket science to use it, but it does take a considerable amount of time to learn to use it well.

This video that I originally retweeted back in May shows how Adobe Photoshop is incorporating generative AI so that the user can easily make dramatic changes in seconds with basic text-based commands. The production of art, movies, software code, engineered designs, and all sorts of complex things are made more efficient by programs that can do many of the tedious and intricate underlying details of the work for the creator.

In addition to making each Photoshop user more productive and powerful (i.e. they can get a lot more done per unit of time), it lowers the barrier of entry for using Photoshop. Just like how computers used to be difficult to use but became easier over time, all sorts of creator applications that have historically been moderately difficult should become way easier.

The pessimistic case is that this type of creator AI assistance will put a large percentage of creators out of work because each creator can do the work of what several creators used to do. However, that assumes that the baseline amount of work to be done won’t change much. The optimistic case is that by broadening access to this type of work, it will dramatically grow the overall usage and demand for creator applications. In other words, a similar or larger number of creators can do an order of magnitude more creation work, rather than the same amount of creation work being done by an order of magnitude fewer creators.

This might be directionally similar to how, by making the cost per megabyte of data storage exponentially less expensive over the past few decades, we began using 1000x or even 1,000,000x as much storage.

Imagine being able to pick up any creator application (art, video, music, programming, 3D printing, engineering software, and so forth) and through verbal or written commands, intuitively use it to make what you want with most of the annoying details abstracted away. This dramatically lowers the barrier of entry and enables the companies that make those creator applications to sell far more subscriptions to far more users. There will still be more experienced users than others, but the overall pie of who can effectively use advanced creator software at a productive level can go up dramatically.

I originally made this point in my May 28th Stock Waves report. At that time, Adobe hadn’t really gained in price much while NVIDIA (NVDA) had already soared in price, and so I recommended Adobe as a preferable investment within the AI trend from a risk/reward perspective. As of this writing, Adobe has now soared in price as well, and much more quickly than the broader Nasdaq 100, so I no longer view it as being at a great entry point. However, I think the overall theme of greatly expanding the accessibility of creators is one to be aware of, and I’d be interested in adding to Adobe on corrections.

F.A.S.T. Graphs

I think a longer-term trend worth taking seriously is how AI can help pharmaceutical and biotech companies. One of the clearest applications for AI is molecule simulation. In many ways, developing new pharmaceuticals is a matter of rapid guesswork and iteration, and AI can do that way faster and less expensively than humans in labs. AI in the years ahead should therefore substantially lower the costs of new drug creation and allow drugs to come to market more quickly. Whether through new drugs or gene manipulation, if there’s a technology that can accelerate our ability to cure most types of cancers, it’s AI.

The dark side of empowering creators, and bio-engineers in particular, is that AI could, for example, dramatically lower the cost to engineer a deadly new virus. I’m less worried about AI overlords taking over the world than I am about what happens when it costs 1/1000th of what it currently does for rogue human groups to create new bioweapons.

Theme 3) Flip the Question Upside Down

In addition to asking the question, “what will benefit from AI?” we need to ask “what will be disrupted by AI?” By process of elimination, this can lead us to a set of investments that, while they might not benefit greatly from AI, at least won’t be heavily disrupted by it. They will continue to provide good risk-adjusted returns when purchased at low valuations.

AI has the capacity to disrupt all sorts of legacy white-collar businesses and repetitive tasks. Administrative overhead within organizations can be greatly streamlined. Their profit margin is AI's opportunity to eat.

Unless or until AI helps us solve some foundational physics problems to give us a world full of fusion reactors and nano-machines, however, it'll likely be slower to affect the physical world than the digital world. What I mostly mean by this is that the production and distribution of energy and materials remain a key limitation.

Within, say, a ten-year time horizon, AI has all sorts of capacities to change how we interact with digital systems, but is unlikely to change energy and materials markets much, other than giving us small or moderate efficiencies around the margins (e.g. better software applications for finding resource deposits, better and faster designs for equipment, and cutting down on administrative overhead costs for energy/materials corporations).

To put it another way, the field of robotics isn’t easy, especially outside of a highly-controlled manufacturing facility. Robots are complex, energy-intensive, materials-intensive, and maintenance-intensive. As a simple example, robot vacuums were invented decades ago but still aren’t ubiquitous, because they still have all sorts of limitations. Other types of robots are 5x or 10x more complex than that and thus have greater constraints when it comes to totally replacing humans. To put it tangibly, we’re a long way off until a robot can reliably drive to my house and fix my HVAC system.

People have historically overestimated the speed of hardware development (supersonic planes, flying cars, robot assistants) and have underestimated the speed of software development. I suspect the same trend will continue over the next decade; software and electronics will absolutely astound us in the years ahead, and yet our physical world will remain remarkably similar to how it is now with only minor changes. I would adjust this base expectation only if I saw some major physics breakthroughs that start to suggest otherwise.

Theme 4) A World Where Nobody Knows What’s Going On

When humans write software programs, those programs are understood by the relevant people. But with machine learning and iteration, that’s not necessarily the case; software can become so complex that no human really knows what’s going on.

AI can write code now, including for making new AI. In other words, a self-reinforcing feedback loop can form, without humans in the loop. It's not difficult to imagine that years or decades from now, this could get… weird.

Right now, at any given time, tons of hackers are trying to breach various software systems that we rely on, and some small percentage of them succeed, which results in a lot of data breaches and other issues. Now, imagine that a group of hackers or a government agency makes an AI that is optimized for hacking. In other words, some AI programs are optimized for beating humans in chess, and some AI programs can be optimized for hacking into systems better than humans can.

Early chess AI learned from games between grandmaster human players, and then could beat those grandmasters every time. Recent chess AI instead learned by playing itself over and over, and can now beat those earlier, human-trained AIs at chess.

Now, suppose that a similar approach is used for hacking. In fact I would be surprised if multiple groups were not already working on this. So, first AI learns how to hack from absorbing existing techniques, and then graduates to learning how to hack from just iterating over and over at superhuman speed and rewriting its code to get better and better. Eventually it becomes orders of magnitude better at hacking than any human, and can breach any system that humans have designed. This could be open-sourced and broadly available.

So what’s the defense against that? For systems that are connected to the internet and have a lot of complexity (and thus can’t realistically be made “unhackable” as some offline or basic systems can), the answer is more AI. Similar to how the only rivals to chess programs at chess are other chess programs, we’ll probably reach a point where the only defenses against AIs optimized for hacking are AIs optimized to update and defend software systems against being hacked.

Imagine a world where superhuman hackers are always and everywhere trying to breach everything, and superhuman defenders are always and everywhere trying to prevent those breaches from happening. And both the attackers and defenders are constantly iterating and updating themselves as they learn from each other, while no human really knows what’s going on because the code is more complex than they can understand and is updated more quickly than they can keep up with.

In that scenario, digital gods will be constantly battling each other in cyberspace in ways we can’t even comprehend, and yet the systems we all rely on are what’s at stake in that battle.

Theme 5) Inflation is a Spectrum

For decades, the combination of offshoring and automation of manufacturing has pushed the cost of goods down. Shoes, textiles, electronics, plastic goods, and similar items have decreased dramatically in price compared to most other things. Price inflation has primarily been limited to services, which could not be as easily automated, and to energy-intensive assets in general.

For example, a television is cheaper today than a television of similar size and resolution 20 years ago, even though the supply of currency increased substantially. Televisions became cheaper even faster than the currency was debased. Similarly, with the number of transistors per processor doubling every two years (as per Moore's law), nothing short of roughly 41% annual money supply growth (the annualized rate of a two-year doubling) would prevent the cost per transistor from falling over time.
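The arithmetic behind that threshold is simple (a back-of-the-envelope illustration, not drawn from any specific data source): a doubling every two years corresponds to an annualized growth rate of 2^(1/2) − 1, roughly 41%:

```python
# Doubling every two years implies an annualized growth rate of 2**(1/2) - 1.
annual_growth = 2 ** (1 / 2) - 1
print(f"Annualized growth of a two-year doubling: {annual_growth:.1%}")

# Over a decade, that compounds to 2**5 = 32x as many transistors per processor.
# Money supply growth far below ~41%/yr can't keep the nominal cost
# per transistor from falling.
decade_multiple = (1 + annual_growth) ** 10
print(f"Ten-year multiple: {decade_multiple:.0f}x")
```

Any currency debasement slower than that compounding rate still loses the race against the falling production cost of transistors.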

AI over the coming years and decades is likely to give us a lot of price disinflation in services. AI can eventually allow one accountant to do the work of several accountants, one executive assistant to do the work of several executive assistants, one graphic designer to do the work of several graphic designers, one programmer to do the work of several programmers, and so forth. It’s hard to predict how fast and widespread the impact will be, but that’s clearly the direction.

We should have greater abundance ahead, in terms of creating things efficiently and running a business efficiently.

The problem, however, is that the growth of money supply is likely to remain significant. Debt-to-GDP and Debt-to-Money-Supply ratios remain very high. With the way the debt-based global financial system is currently structured, it’s hard to sustain the current levels of government, corporate, and household debt without greater amounts of broad and base money.

From the start of 2000 to the end of 2022, we can see a significant divergence in asset and consumer prices. The gold price increased by 8.3% per year. The broad money supply per capita grew at 6.8% per year. The price of hospital services grew by 5.3% per year. The oil price grew at 4.7% per year. The median house price grew by 4.7% per year. Childcare prices grew by 4.2% per year. Average hourly earnings for nonsupervisory production workers went up by 3.2% per year. Official consumer price inflation averaged 2.6% per year. The average bank account had a yield of less than 2% per year. Apparel was flat in price. Electronics, plastic toys, and software of all sorts dropped in price.
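To make the divergence concrete, here is a quick compounding of those annualized rates over the 23-year span (a simple illustration using the figures above):

```python
# Compound the annualized growth rates quoted above over the 23 years
# from the start of 2000 through the end of 2022.
rates = {
    "Gold": 0.083,
    "Broad money per capita": 0.068,
    "Hospital services": 0.053,
    "Oil / median house": 0.047,
    "Childcare": 0.042,
    "Hourly earnings": 0.032,
    "Official CPI": 0.026,
}
years = 2023 - 2000  # 23 years

for name, rate in rates.items():
    multiple = (1 + rate) ** years
    print(f"{name:24s} {multiple:5.2f}x")
```

Gold's 8.3% per year compounds to roughly a 6.3x increase over the period, while 2.6% official CPI compounds to only about 1.8x: small annual differences become large cumulative gaps.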

This chart shows the official price growth of most of those items, normalized to the start of the year 2000:

St. Louis Fed

-Truly scarce things like fine art, gold, and waterfront properties (not shown on the chart) kept up with the growth of broad money supply, and surpassed official consumer price inflation by a wide margin.

-Semi-scarce things we don't know how to easily make more of, such as hospital services, childcare, oil, grass-fed beef, and the median house, went up a bit more slowly than the rate of broad money creation, but still faster than average consumer price inflation.

-The official consumer price inflation number was weighed down by all sorts of deflationary items that were assisted by exponential technology that makes them nearly costless to produce.

Plus, the consumer price index is inherently a recursive function. If the price of ribeye steak goes up and consumers shift to lower cost ground beef in response, then the basket of goods that the consumer price index is calculated from adjusts down to that new baseline. If ground beef gets too expensive and consumers shift to lower-cost poultry, then the basket of goods will be adjusted again. If poultry gets too expensive and consumers shift toward more rice and beans, then the basket of goods will be adjusted again. If butter from grass-fed cows gets too expensive and people shift toward industrially-produced soybean oil, then the basket of goods adjusts to that. Thus, highly-priced goods and services tend to filter themselves out of the basket, and low-priced goods and services tend to be emphasized.
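That substitution mechanism can be sketched with a toy model (all numbers here are hypothetical, purely to illustrate the recursion): every item's price rises at the same rate, but the measured basket keeps swapping to cheaper substitutes, so the measured index rises far more slowly than the price of any fixed item:

```python
# Toy model of CPI substitution. Every item's price rises 12% per year.
# A fixed-item index tracks ribeye forever; the "measured" basket switches
# to the next-cheapest substitute once the tracked item exceeds a budget.
# All numbers are hypothetical, purely to illustrate the mechanism.

base_prices = {"ribeye": 10.0, "ground beef": 5.0, "poultry": 3.0, "rice and beans": 1.5}
order = list(base_prices)
inflation, budget, years = 0.12, 12.0, 20

i = 0  # index of the item currently in the measured basket
for year in range(years + 1):
    factor = (1 + inflation) ** year
    # Substitute down the quality ladder once the tracked item busts the budget
    while i + 1 < len(order) and base_prices[order[i]] * factor > budget:
        i += 1
    fixed = base_prices["ribeye"] * factor
    measured = base_prices[order[i]] * factor

print(f"Fixed-item (ribeye) price after {years} years: ${fixed:.2f}")
print(f"Measured basket price (now {order[i]}):        ${measured:.2f}")
```

In this toy world every individual price rose 12% per year, yet the measured basket ends up several times cheaper than the fixed item, because the basket quietly slid from ribeye down to rice and beans along the way.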

AI may exacerbate this trend. AI can make several types of services cheaper and more abundant to us, while still not really changing the scarcity of certain goods and services. We continue to need to visualize inflation as a spectrum, spread across multiple different categories.

-If someone wishes to buy finite things like prime real estate, fine art, or the leading tech monopoly stocks, their income and net worth better keep up with the growth of broad money supply.

-If someone wishes to own the median house, with a good healthcare plan and retirement savings, along with nutritionally-dense food and plenty of energy abundance, their income and net worth better surpass official CPI, even if it falls a bit short of broad money supply growth.

-If someone wishes to just get by, living in multifamily housing or with their relatives, with their time spent mainly in the virtual world with cheap mass-produced processed food and with cheap mass-produced consumer goods, then they can get by with less growth of income and net worth, in line with official CPI or lower.

Theme 6) The Intersection of AI and Bitcoin

I think it’s worth spending a bit of extra time on this theme. It’s not necessarily the biggest theme of the six but it’s one that I’ve spent a considerable amount of time looking into due to my prior work and research within the Bitcoin ecosystem and adjacent ecosystems.

Theme 6A) What is the Best Money for AI?

Which money is the best form of money for AI agents to use? From what I’ve seen so far, I think bitcoin is the leading answer.


In a future world with a lot of human-to-machine, machine-to-human, and machine-to-machine payments, open-source digitally-native money is inherently suitable. I wrote about this topic in my August 2022 article on the Lightning Network, after previously discussing it with Elizabeth Stark, CEO of Lightning Labs:

Plus, the network is capable of doing micropayments that are much smaller than what Visa and Mastercard can do. With Lightning, you can send payments worth a fraction of a penny. This opens up new use-cases that aren’t possible with credit cards, for example, such as machine-to-machine payments, the streaming of micro-payments, or the usage of micro-payments as a spam-prevention technique.

–Lyn Alden, A Look at the Lightning Network, August 2022

And then more recently in my May 28th Stock Waves report, I highlighted this example:

An AI agent can spin up a bitcoin/cryptocurrency wallet, earn funds, and spend funds.

AI agents are already being given bitcoin balances over Lightning, which they can use to access services that they need to complete their assigned tasks:

Kody Low

In that case, the AI agent is purposely set up by the creator to have some funds to spend as needed. However, it doesn’t take much creativity to imagine a sufficiently advanced AI agent being able to recreate this process from scratch. In other words, an AI could start with no money, and be given a task or just “decide” to do a task. As it assesses the best way to do that task, it determines that certain paid services would be helpful or critical to it. So, it generates a bitcoin/lightning wallet (unlike a bank account it doesn’t need permission to do this, since it’s open source), and either convinces its creator to give it funds or otherwise finds a way to earn funds online (maybe it posts a bunch of clever content on Nostr and earns tips). Now, with access to money, it pays for things it needs.

Imagine the eeriness of finding an AI agent with money as part of some middle step along the way toward completing a task that was assigned to it, and even the creators don’t really know how it got the money.

Efficient open-source digital-native money, which can be used faster, permissionlessly, and with smaller amounts (microtransactions) than existing fiat currencies, opens up more possibilities for machine-to-machine payments, machine-to-human payments, and human-to-machine payments. It is a natural type of money for AI agents to use.

-May 28th, 2023, Stock Waves report

That trend ended up having legs to it. At the end of June, the bitcoin-related companies Fedi and Stak teamed up to organize an AI hackathon, which ran through the month of July:

Fedi and Stak are calling on all developers, builders, and pioneers who believe in empowering people and communities through technology: Join us and help democratise artificial intelligence with Bitcoin, Lightning, and Fedimint.

Organised with the support of bitcoin and AI companies, the Ai4ALL Hackathon will run remotely on the BOLT.FUN and Replit platforms from July 1 – July 31, 2023. With a series of workshops, live coding sessions, interactive tutorials and mentorships led by key players from both ecosystems, participants can compete for prizes, collaborate with peers, and level up their skills while leveling up humanity.

We’re at a critical moment in the development of generative AI that will help determine whether it is controlled by centralisers, censors, and legacy institutions, or by free and empowered people and communities.

–Fedi, June 30th

In early July, Lightning Labs (a major Lightning Network infrastructure provider) announced a new set of developer tools to support this emerging bitcoin+AI intersection:

LangChainBitcoin: a suite of tools that enables LangChain agents to directly interact with Bitcoin and also the Lightning Network, including two main features:

  • LLM Agent BitcoinTools: a series of tools that leverages the newly released OpenAI GPT function calls to enable developers to create agents that can hold a Bitcoin balance (on-chain and Lightning), send/receive Bitcoin on Lightning, and also directly drive an LND node.
  • L402 HTTP API Traversal: a Python wrapper for a LangChain LLM chain (APIChain) that enables agents to transparently access L402 payment-metered APIs.

Aperture: the latest release of Aperture, a drop-in reverse proxy server implementation of the L402 protocol which can transform any API into a pay per use resource gated via L402 functionality and sats on the Lightning Network.

  • Now with Lightning Node Connect Support: Aperture supports any Lightning Node running litd, a binary that bundles Lightning Labs' services alongside LND (e.g. Voltage, Umbrel, Start9, RaspiBlitz, BTCPay Server), through the Lightning Node Connect protocol, an end-to-end encrypted mechanism for connecting a Lightning node to a web browser.
  • Rich Dynamic API Endpoint Pricing: Aperture now supports the ability to optionally provide full HTTP request context to the proxied backend to enable dynamic per-call pricing. In contrast to the existing static pricing configuration, dynamic pricing allows a backend to implement custom endpoint pricing logic.

L402 bLIP: a Bitcoin Lightning Improvement Proposal (BLIP) specification for the L402 protocol standard to openly share the primitives for building these components with the community.

  • L402 is a protocol standard for payment-metered APIs explicitly designed to fit neatly into the existing HTTP 402 Payment Required status response code, which was optimistically included by the creators of the Web to enable native payments. A combination of Bitcoin (the Internet's native currency) and the Lightning Network enables globally accessible, rapid, scalable micropayments for paid APIs via L402.

–Lightning Labs, July 6th

The Lightning Labs August newsletter focused a lot on the bitcoin+AI intersection and discussed the previously mentioned hackathon winners as well.

In plain English, Lightning Labs tools are 1) making it easier to build AI agents that can hold and spend bitcoin as part of completing their tasks, and 2) making it easier to pay for resource-intensive computation on a per-use basis using the Lightning Network and the longstanding “HTTP 402 Payment Required” status code.
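The core of that second point can be sketched in a few lines (a simplified illustration, not Lightning Labs' actual implementation; real L402 pairs a macaroon credential with a Lightning invoice, while this toy version models only the payment-proof check): reject unpaid requests with a 402 challenge, then serve requests that present the preimage matching the challenged payment hash:

```python
import hashlib
import secrets

# Simplified sketch of an L402-style payment gate. In real L402, the server
# issues a Lightning invoice and the preimage is revealed by paying it; here
# we model only the core check: the client must present the preimage whose
# SHA-256 hash matches the payment hash issued in the 402 challenge.

_challenges = {}  # payment_hash (hex) -> issued flag

def issue_challenge():
    """Server side: create an invoice-like challenge (the 402 response)."""
    preimage = secrets.token_bytes(32)  # in reality, revealed by the payment
    payment_hash = hashlib.sha256(preimage).hexdigest()
    _challenges[payment_hash] = True
    return {"status": 402, "payment_hash": payment_hash, "preimage": preimage}

def gated_request(payment_hash, preimage):
    """Server side: serve the API call only if the payment proof checks out."""
    if payment_hash in _challenges and hashlib.sha256(preimage).hexdigest() == payment_hash:
        return {"status": 200, "body": "metered API response"}
    return {"status": 402, "error": "payment required"}

challenge = issue_challenge()
paid = gated_request(challenge["payment_hash"], challenge["preimage"])
unpaid = gated_request(challenge["payment_hash"], b"wrong preimage")
print(paid["status"], unpaid["status"])  # 200 402
```

The elegance of the scheme is that the proof of payment (the preimage) doubles as the API credential, so no account signup or credit card is needed.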

To see the latter in action, you can watch this two-minute video, which was shared by Lightning Labs’ CTO Olaoluwa Osuntokun. In the video, the user (Kody Low) has GPT-4 write code for him, and pays GPT-4 on a per-question basis (about $0.01 per question) using Lightning. The code he is asking the AI to write is ironically the very thing he is already demonstrating as a working service: the ability to pay for API calls with Lightning.

More specifically, Kody asked the AI: “How do I create a reverse proxy to gate my api calls with lightning payments? Use an LND backend and write the reverse proxy for me in golang.” He then paid the Lightning invoice. The AI understood the request, received the payment, and provided well-organized and well-commented code to answer the question. Kody then asked it to rewrite the code in the Rust programming language rather than Go, and paid it again. The AI rewrote the code, and this time also gave more detail in English about what certain parts of the code are doing, and clarified that a fully-implemented version would need more extensive error handling and other nuances, but that this is the basic outline of the program. Kody then thanked it, said AI and Lightning go great together, and paid it again. The AI said that Kody is welcome, and agreed that both AI and Lightning are powerful technologies that work well together to create cutting-edge solutions.

I’ve personally tested one of these programs, where I could access ChatGPT on my phone without using my subscription and pay the equivalent of a penny or two in Lightning with one button press to ask a question, and the pricing would fluctuate dynamically per question, presumably depending on some supply/demand algorithm running in the background.

For fun, I asked ChatGPT for some examples where an AI agent might need to pay humans for things as part of completing its assigned tasks. Here was its response:

Great question! AI agents might require to send payments to humans as part of completing a variety of tasks. Below are some potential scenarios:

  1. Gig Economy and Microtasks: An AI might be tasked with finding someone to complete a small task, such as labeling images or transcribing audio, which often happens in gig platforms. After the task is completed, the AI could send the agreed-upon payment to the worker.
  2. E-Commerce Transactions: If an AI is managing an e-commerce platform or online marketplace, it might need to send payments to sellers or service providers. This could be in response to product sales, refunds, or service charges.
  3. Content Creators and Online Influencers: AI agents might manage platforms or channels where humans post content or influence others. They could be programmed to distribute payments or rewards to these creators based on views, likes, shares, or other metrics.
  4. Crowdsourcing and Idea Generation: Companies could use AI to manage innovation or idea generation campaigns, where people are incentivized to submit ideas. Once an idea is selected, the AI could send a payment or reward to the person.
  5. Customer Refunds: If an AI is handling customer service for a company, it may need to issue refunds for returned products, cancelled services, or compensation for any inconvenience caused by the company.
  6. Survey Participation: AI might be managing surveys for market research or other data collection purposes. Participants could be compensated for their time and input.
  7. Charitable Donations: AI could be used to manage a charitable organization’s payments to beneficiaries.
  8. Freelancer Payment: An AI might be employed to manage freelance or contract workers, issuing payment upon completion of work.
  9. Prizes and Rewards: If an AI is managing a contest or game, it may need to send prize money or rewards to the winners.

These are just some of the many potential applications. The common thread is that any task where a human is providing a service or value that needs to be compensated could potentially be managed by an AI.

I then asked it for some examples where an AI agent might need to pay another AI agent as part of completing its assigned tasks, and it answered:

There are indeed many potential scenarios in which an AI agent might need to transact with another AI agent. This could occur in situations where systems are designed to interact and trade resources or services autonomously, such as:

  1. Autonomous Vehicles: In a future of self-driving cars, AI agents controlling individual vehicles might pay other AI-controlled vehicles for right-of-way or to secure parking spaces.
  2. AI Marketplaces: In an AI marketplace, one AI might pay another for specific services, such as data analysis, machine learning model training, or optimizing complex tasks.
  3. Content or Data Acquisition: An AI may pay another AI for exclusive data or content, like proprietary research, market analysis, or digital assets.
  4. Cloud Computing Resources: An AI managing a server or cloud resources might pay another AI to use its excess computational power or storage space.
  5. Digital Advertising: An AI managing an advertising campaign might pay another AI to place its ads in optimal locations or times on digital platforms.
  6. AI Mediated Auctions: In real-time bidding systems for online advertising, one AI agent may pay another for ad slots that target a specific audience group.
  7. Gaming: In online gaming scenarios, one AI may need to pay another AI for virtual goods, services, or game advancements.
  8. Internet of Things (IOT): In a smart home or city, one IoT device (managed by AI) might pay another to prioritize its tasks. For instance, a smart fridge could pay a smart oven to delay its pre-heating function until the fridge has finished its power-intensive cooling cycle.

These examples are all speculative and depend on a complex infrastructure that allows for secure AI-to-AI transactions. But as technology continues to advance, we may see more systems designed to allow this kind of autonomous interaction.

Now, the right skeptical question for a reader to ask at this point is: “why pay in Lightning for AI usage, rather than just pay in dollars?” The answer is that Bitcoin/Lightning is uniquely well-suited to working with AI for a few reasons:

No Chargebacks: Unlike most software, AI is very hardware-intensive on a per-usage basis. Each query runs on electricity-hungry GPU hardware, so while the software itself is cheap once developed, every individual usage carries a substantial real-world electricity and hardware cost. If an API is left open to be paid via conventional means (e.g. credit cards), the merchant becomes a target for fraud: a user could spam the GPUs with thousands of requests and then perform chargebacks on their payments, leaving the merchant to bear the electricity and hardware costs while losing the revenue. Lightning, on the other hand, is a method of bearer-asset final settlement with no chargebacks.

Microtransactions: Most payment methods have limits on how small individual transactions can be. You can’t pay $0.03 with a Visa card, for example, and PayPal has merchant-side minimum fixed fees of $0.49 per transaction. The closest workaround is to pre-fund an account and then make micro-payments per usage within the provider’s internal ledger. It all gets more complicated across borders and currencies: if someone in Malaysia wants to pay someone in California for some AI usage, what payment method should they use? Even within my subscription payment backend on my website, my operations manager and I sometimes have issues accepting foreign credit cards, and in many countries people use mobile payment services rather than credit cards anyway. Lightning can handle microtransactions globally: a user can pay a few cents per usage from anywhere to anywhere, without subscriptions and without pre-funding. This is highly relevant for machine-to-machine payments, where subscriptions may not be ideal and machines just need to interact with each other on a real-time basis.
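Some rough arithmetic makes the fixed-fee problem vivid. The $0.49 PayPal minimum fixed fee is from the text above; the Lightning routing fee used here (about one satoshi, taken as $0.0003) is an illustrative assumption, since actual routing fees vary by route and by the bitcoin price.

```python
# Per-payment fee overhead for a hypothetical $0.03 AI query.
payment_usd = 0.03

paypal_fixed_fee = 0.49   # merchant-side minimum fixed fee, from the text
lightning_fee = 0.0003    # assumed ~1 sat routing fee; varies in practice

paypal_overhead = paypal_fixed_fee / payment_usd
lightning_overhead = lightning_fee / payment_usd

print(f"PayPal fee overhead:    {paypal_overhead:.0%}")     # over 1,600% of the payment
print(f"Lightning fee overhead: {lightning_overhead:.0%}")  # about 1% of the payment
```

At these numbers, the fixed fee is more than sixteen times the payment itself, which is why per-usage pricing is simply impossible on such rails without pre-funded ledgers.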

Permissionless and Programmable: AI agents can be spun up to do all sorts of things. Since an AI agent has no legal identity, it is easier for it to hold and use open-source programmable money than to use a bank account or bank payment product. An AI agent might not be able to open a bank account, but it can run a Lightning node or operate a Lightning wallet. And the programmable, real-time nature of Lightning allows for dynamic pricing. In other words, the price per API call can be programmed to vary from second to second depending on how many incoming API requests the service is receiving at the same time.
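A dynamic pricing rule of the kind described above could be as simple as a function of the current request rate. The base price, capacity, and surge curve below are made-up illustrative parameters, not anything a real service publishes; the point is only that the quoted price can respond to demand on every single call.

```python
# Hypothetical surge pricing for a pay-per-call AI API.
def quote_price_sats(requests_per_sec, base_price_sats=10,
                     capacity_rps=100, exponent=2.0):
    """Quote a per-call price in satoshis that rises with current load."""
    load = min(requests_per_sec / capacity_rps, 1.0)  # clamp at full capacity
    # Quadratic surge: price ramps up to 5x base as load approaches capacity.
    return base_price_sats * (1 + 4 * load ** exponent)

print(quote_price_sats(10))   # light load: close to the base price
print(quote_price_sats(100))  # at capacity: 5x the base price
```

Each 402-style invoice would then be minted at whatever `quote_price_sats` returns at that instant, so pricing and payment settle together in real time.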

Theme 6B) Proof-of-Work as Spam Prevention

Now that artificial intelligence is making the creation of pictures, videos, texts, programs, and other content almost costless, it’s becoming harder to tell which content is genuine and which is a high-quality fake, whether a doctored video or a fabricated personality. When we see a picture or a video that might be important from a corporate, social, political, or geopolitical perspective, how can we know that it is actual footage rather than an AI-generated digital forgery? Now imagine what this will be like in five or ten years as the technology improves.

And perhaps more concerning is how convincing bots are becoming. Whenever I use Twitter (now known as “X”), my posts get flooded with bots. Luckily, they are low-quality, badly-programmed, obvious bots. However, there are also human impersonators who manually spin up accounts resembling mine and try to phish people with scams. What if we get to a point where AI bots on social media are actually quite convincing? And what if they can exist in uncountably high numbers, created as fast as they can be identified and blocked?

In addition to spamming people with bots as a form of fraud, a sophisticated bot strategy might involve building an entire fake community to promote some idea and make it seem popular. Tons of different artificial personalities might be able to operate online, and be convincing to many people. Imagine a scenario where someone writes a program that creates an army of bots, each one unique and each one several years more advanced than current ChatGPT, and they coordinate to promote some political candidate, some social idea, or otherwise to just boost the perceived audience of some influencer. And now imagine that this program is open-sourced and exists for anyone that wants to use it. Pandora’s Box is opened.

Imagine interacting online at a time when 1) the majority of human-seeming entities operating online are not human and 2) it’s very hard to determine which ones actually are human. That’s a weird environment to navigate.

One way to handle that is for social media platforms to enforce an identity check during sign-up, so that new users must provide considerable evidence that they are a real person. Elon Musk’s Twitter/X seems to be moving in that direction, as have some other platforms. While that might indeed work for certain platforms, the obvious downside of this approach is that anonymity is an important part of safety and truth-telling in many cases. People in certain professions who can’t comfortably be themselves online rely on anonymous accounts. More gravely, in many countries certain types of speech are illegal: leaders can’t openly be criticized, certain topics are off-limits, and so forth. In fact, several of the founding fathers of the United States used pseudonyms for protection when publishing their pre-revolution critiques of the British monarchy. Authoritarians love forced identity checks as a condition for participating in social media, and a rampant bot problem would give them a great excuse to implement them. And they can easily coerce various platforms into handing the data over to them.

The other approach is that social media can remain open and anonymous, but some sort of proof-of-continuity or payment check can be used to prevent a flood of countless bots. Back in the 1990s, the cryptographer Adam Back invented proof-of-work in the form of Hashcash to defend against various types of spam or denial-of-service attacks. It was eventually used by Satoshi Nakamoto in the heart of the Bitcoin protocol, but hasn’t otherwise been widely implemented in social media or other parts of the internet.
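A stripped-down Hashcash-style proof-of-work, in the spirit of Adam Back's original anti-spam scheme, fits in a few lines. The sender must find a nonce whose hash of the message meets a difficulty target, which makes each message slightly costly to produce but trivial for anyone to verify. (This sketch uses SHA-256 rather than the SHA-1 of the original Hashcash header format, and omits the date and version fields.)

```python
import hashlib
from itertools import count

def mint(message, difficulty_bits=16):
    """Find a nonce whose hash of message:nonce falls below the target."""
    target = 2 ** (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(message, nonce, difficulty_bits=16):
    """Cheap check that the claimed nonce really meets the target."""
    target = 2 ** (256 - difficulty_bits)
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < target

nonce = mint("hello@example.com")  # costs ~2^16 hash attempts on average
print(verify("hello@example.com", nonce))    # True
print(verify("spoofed@example.com", nonce))  # almost certainly False
```

The asymmetry is the whole trick: a legitimate sender pays the cost once per message, while a spammer must pay it millions of times, and the receiver verifies each stamp with a single hash.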

Bitcoin’s ledger has a practically unforgeable history to it. Energy is required to update the ledger with each block on the blockchain, and the full history of the ledger has a tremendous amount of energy encoded into it, which makes it very hard to forge. In addition to being usable as a monetary asset, the Bitcoin blockchain now exists as a sort of global open-source immutable time capsule, where arbitrary data (including documents, pictures, books, songs, or timestamps from other systems) can be inserted into it for a cost, and it’s provable that what was inserted into it hasn’t been altered since that date. No closed-source, opaque, and centralized server can replicate this feature.
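The timestamping mechanism can be sketched simply: a publisher commits to a document by embedding its SHA-256 digest in a Bitcoin transaction (for example, in an OP_RETURN output), and anyone can later check that a copy of the document matches the on-chain commitment. The on-chain lookup itself is out of scope here; `committed_digest` below stands in for a digest already read off the chain.

```python
import hashlib

def commit(document: bytes) -> str:
    """Digest to embed on-chain; the block's timestamp then dates it."""
    return hashlib.sha256(document).hexdigest()

def matches_commitment(document: bytes, committed_digest: str) -> bool:
    """Check a document copy against a digest recovered from the chain."""
    return hashlib.sha256(document).hexdigest() == committed_digest

original = b"Report published 2023-09-21: findings attached."
committed_digest = commit(original)  # this 32-byte digest is what goes on-chain

print(matches_commitment(original, committed_digest))                 # True
print(matches_commitment(original + b" (edited)", committed_digest))  # False
```

Note that only the digest needs to be on-chain, not the document itself, so even large files can be timestamped for the cost of one small transaction.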

Could Bitcoin’s proof-of-work be used to secure other types of systems against spam, in line with what Adam Back originally created Hashcash for? There are some avenues that have been explored along those lines, which might become increasingly relevant in the years ahead.

Payments: Bitcoin is currently used by Stacker News both as a posting incentive and as a form of spam control. Users have to pay a tiny amount of fractional bitcoin to post and can tip each other with fractional bitcoin. The problem with traditional payment rails like Visa is that they identify the person, whereas bitcoin provides proof-of-payment without requiring the payer’s identity to be known, and for spam prevention, proof-of-payment is all that’s needed. This could be tailored in various ways. If Twitter/X implemented a similar feature, then I could, for example, have a setting that allows only accounts with some type of micro-cost associated with their identity to comment on my posts, in order to prevent spam.

Identity with a Cost: Someone can create a public/private key pair (as on Nostr), then go a step further and embed the public key into the Bitcoin blockchain at a real-world resource cost due to transaction fees, verify to people in various ways that this key is indeed theirs, and use the corresponding private key to sign any digital content that they create or authorize. The same can be done by any large journalistic organization with known integrity standards. Platforms could check to see which key pairs are embedded or not. From that point on, this allows people to differentiate forgeries from genuine content (at least to the extent that it was created by that individual or organization and hasn’t been altered since publication, not necessarily that it’s objectively true), and it ensures that a million impersonators cannot use similar key pairs that have been embedded, at real-world expense, into the Bitcoin blockchain.

Although we’re not quite at that stage yet, in a world of sufficiently advanced artificial intelligence and virtually costless impersonation or forgery, we might learn to demand proof-of-continuity (cryptographic public/private key pairs) and proof-of-work (evidence that real-world resources were expended to add weight to a given key pair or online identity) before taking certain types of digital content seriously.

In that potential future world I described earlier, where 1) the majority of human-seeming entities operating online are not human and 2) it’s very hard to determine which ones actually are human, we would need to adopt various ways to sort out which entities have some sort of real-world tangibility or cost associated with them, to find signal in the near-infinite ocean of noise. Identity checks and real-world resource costs are the two main directions that platforms can take to deal with that.
