Updates to the Chatbot default model

Jun 23, 2025 jeff's Blog


Changes to the Chatbot Component

The field of Large Language Models (LLMs) is an evolving one, with new providers and models being made available periodically and older ones being shut down.

The Chatbot component permits you to select from several Large Language Model (LLM) providers and models. The available selection changes over time as we add support for talking to the various providers and models.

The Chatbot component uses a proxy service run by MIT that offers a consistent interface to MIT App Inventor apps. It handles the nuances of communicating with each of the various providers.

MIT offers a default free quota, currently 10,000 tokens per day.1 For some LLM providers we also offer a mechanism to supply your own API key, which disables the quota. Of course, in that case you need to create an account with the associated provider and pay whatever their fee structure requires to obtain the API key.

When we first released the Chatbot component and related proxy, we only had support for OpenAI and ChatGPT. We also mistakenly referred to the ChatGPT provider as “chatgpt” when we should have used “OpenAI.”

In any case, if you did not explicitly choose a provider and model, we would default to OpenAI and one of the ChatGPT models, currently “gpt-4o-mini.”

What We Are Changing

We have recently made a change to the default provider and model. If you use the defaults, we will now be using the “meta.llama4-maverick-17b-instruct-v1:0” model via Amazon’s “Bedrock” service. We made this change because we have received a grant from Amazon for education and research. We believe this model is as good as or perhaps better than the previous default.

Some subtleties

If you leave the provider at its default, which explicitly sets it to “chatgpt” (an artifact of how we originally set up the Chatbot component), and do not select a model (leaving the model field blank or set to “default”), we will use the llama4 model. If you specify an explicit model, however, your request will continue to go to ChatGPT. Similarly, if you provide your own OpenAI API key, your requests will continue to be routed to ChatGPT.
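The routing rules above can be summarized in a small sketch. This is a hypothetical illustration of the decision the proxy makes, not the actual proxy code; the function name, backend labels, and fallback model are assumptions for the purpose of the example.

```python
# Illustrative sketch of the proxy's routing decision described above.
# Names here ("route_request", "bedrock", "openai") are hypothetical.

DEFAULT_PROVIDER = "chatgpt"  # historical artifact: the component default
LLAMA4_MODEL = "meta.llama4-maverick-17b-instruct-v1:0"

def route_request(provider, model, api_key=None):
    """Return the (backend, model) pair a request would be routed to."""
    if api_key:
        # A user-supplied OpenAI API key keeps routing to ChatGPT.
        return ("openai", model or "gpt-4o-mini")
    if provider == DEFAULT_PROVIDER and model in ("", "default"):
        # Default provider with no explicit model now goes to llama4 on Bedrock.
        return ("bedrock", LLAMA4_MODEL)
    # An explicit model keeps the request with the named provider.
    return ("openai" if provider == "chatgpt" else provider, model)
```

For example, under this sketch a request with the default provider and a blank model would go to the Bedrock llama4 model, while the same request with an explicit model such as “gpt-4o-mini” would still go to ChatGPT.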

Existing Apps are Affected

Because this change is made in the proxy, existing applications, including those already packaged and running on devices, will see the effect of this change.

Footnotes

1 More precisely, 10,000 tokens over the preceding 24-hour period.