Bringing LLMs to the Internet Computer
What Is the Ignition Milestone?
The Ignition milestone, part of the Decentralized AI track of the ICP Roadmap, focuses on connecting canisters to off-chain LLMs. With this update, developers can use AI in their dapps without a complex setup, thanks to AI workers.
What’s New in Ignition
LLM Libraries for Easy Integration
To simplify connecting your canisters to LLMs, we’ve added libraries in three languages used on the Internet Computer: Motoko, Rust, and TypeScript.
These libraries make it quick to use LLMs in your dapps. For example, here’s how a canister can interact with Llama 3.1 using just a few lines of Motoko code:
import LLM "mo:llm";

await LLM.chat(#Llama3_1_8B).withMessages([
  #system_ {
    content = "You are a helpful assistant.";
  },
  #user {
    content = "How big is the sun?";
  },
]).send();
The example above can be found and tinkered with on ICP Ninja. It shows how a canister can send a query to an LLM and get a response with minimal setup.
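For single-turn queries that don't need a conversation history, the library also provides a simpler prompt helper. The snippet below is a minimal sketch following the mo:llm examples; the exact signature of LLM.prompt may vary between library versions:

```motoko
import LLM "mo:llm";

actor {
  // Send a one-off prompt and return the model's reply as Text.
  // LLM.prompt is a convenience wrapper around chat with a single user message.
  public func askSun() : async Text {
    await LLM.prompt(#Llama3_1_8B, "How big is the sun?")
  };
}
```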
Here’s another example showing how to call LLMs with tools:
import LLM "mo:llm";
actor {
  // Public actor functions must have an async return type to use await.
  public func example() : async () {
    let response = await LLM.chat(#Llama3_1_8B)
      .withMessages([
        #system_ {
          content = "You are a helpful assistant.";
        },
        #user {
          content = "What's the weather in Zurich?";
        },
      ])
      .withTools([
        LLM.tool("get_weather")
          .withDescription("Get current weather for a location")
          .withParameter(
            LLM.parameter("location", #String)
              .withDescription("The location to get weather for")
              .isRequired()
          )
          .build()
      ])
      .send();
  };
}
You can find a live demo of using tools here, as well as the source code of the demo here.
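Declaring a tool only tells the model what it may call; the canister still has to detect the tool call in the response, run the corresponding function, and (in a full implementation) feed the result back to the model. Here is a minimal sketch of the detection step, assuming the response shape used in the library's examples — response.message.tool_calls with function.name and function.arguments as name/value pairs; these field names may differ in your version of mo:llm:

```motoko
import LLM "mo:llm";

actor {
  // Hypothetical local implementation of the "get_weather" tool.
  // A real canister might fetch live data via HTTPS outcalls.
  func getWeather(location : Text) : Text {
    "sunny in " # location
  };

  public func weather() : async Text {
    let response = await LLM.chat(#Llama3_1_8B)
      .withMessages([#user { content = "What's the weather in Zurich?" }])
      .withTools([
        LLM.tool("get_weather")
          .withDescription("Get current weather for a location")
          .withParameter(
            LLM.parameter("location", #String)
              .withDescription("The location to get weather for")
              .isRequired()
          )
          .build()
      ])
      .send();

    // The model may answer directly or request a tool call.
    var result = "";
    for (toolCall in response.message.tool_calls.vals()) {
      if (toolCall.function.name == "get_weather") {
        // Assumed argument shape: a list of { name; value } records.
        for (arg in toolCall.function.arguments.vals()) {
          if (arg.name == "location") {
            result := getWeather(arg.value);
          };
        };
      };
    };
    result
  };
}
```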
AI Workers
To connect canisters with off-chain LLMs, we've built a minimum viable product of AI workers: a simple system that relays requests from canisters to off-chain LLMs and returns the responses, so canisters can interact with LLMs in real time.
Currently, AI workers support Llama 3.1 8B, Llama 4 Scout, and Qwen 3 32B. These options give developers flexibility to choose the right LLM for their project.
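Because the model is just a variant passed to LLM.chat or LLM.prompt, switching models is a one-line change. In the sketch below, #Llama3_1_8B is the variant name used in the examples above, while #Qwen3_32B is an assumed name for the Qwen 3 32B variant and may differ in the actual library:

```motoko
import LLM "mo:llm";

actor {
  // Route a question to a caller-chosen model.
  // Note: #Qwen3_32B is an assumed variant name, not confirmed from the library.
  public func ask(useLargerModel : Bool, question : Text) : async Text {
    let model = if (useLargerModel) #Qwen3_32B else #Llama3_1_8B;
    await LLM.prompt(model, question)
  };
}
```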
Here’s a diagram illustrating how AI workers are used under the hood to relay prompts to off-chain LLM providers.
Real-World Use Cases
Developers are already using the LLM libraries to build creative dapps, showing how AI workers can support a range of applications on the Internet Computer.
Why This Matters and How to Get Started
The Ignition milestone makes it simple for developers to integrate LLMs into their Internet Computer projects, enabling new kinds of dapps, like chatbots, analytics tools, and AI-driven DeFi apps. By combining LLMs with Internet Computer features like Chain Fusion, HTTPS outcalls, and onchain randomness, developers can build creative and powerful solutions.
Ready to try it? Explore the LLM Chatbot project on ICP Ninja, check out the live demo, or dive into the code and examples in our repository.
Bringing LLMs to the Internet Computer was originally published in The Internet Computer Review on Medium, where people are continuing the conversation by highlighting and responding to this story.