Over the past few years, a new kind of technology has changed the way many people look for information. Tools like ChatGPT, Claude, Gemini, and others are built on what is known as a large language model, or LLM for short. You type a question in everyday language, and the model generates a response as if you were talking to a knowledgeable person.
At their core, LLMs are programs trained on very large amounts of text. During training, they learn patterns in language: how words relate to each other, how sentences are structured, how ideas connect. When you ask something, the model uses those patterns to produce a response that fits your question.
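As a toy illustration of "learning patterns and continuing text", consider a model that has merely counted which word follows which in a tiny corpus. This is not how real LLMs are built (they use neural networks trained on vastly more data), but the core idea of predicting the next token from observed patterns is the same:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then continue a prompt by always picking the most frequent successor.
corpus = "the model reads text . the model learns patterns . patterns connect ideas".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def continue_text(word, steps=3):
    out = [word]
    for _ in range(steps):
        if word not in successors:
            break  # no observed successor: stop generating
        word = successors[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # → "the model reads text"
```

A real LLM replaces these raw counts with billions of learned parameters and looks at far more than the previous word, which is why its continuations read like coherent answers rather than parroted fragments.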
This has made many things more accessible. You can ask an LLM to explain a concept, summarize a long paper, draft a piece of code, or help you think through a problem. For a lot of people, it has lowered the barrier to knowledge that was previously scattered across textbooks, papers, and online forums.
Think about it: decades of textbooks, research papers, tutorials, and discussions, compressed into a single model you can just talk to. Knowledge that used to require years of study or the right connections is now a question away. We are entering an era where access to knowledge is approaching zero cost.
But the technology has real limitations, and it is important to be straightforward about them:
- Models can produce confident-sounding answers that are simply wrong, a problem often called hallucination.
- Their knowledge comes from training data with a cutoff date, so recent developments may be missing or outdated.
- Answers usually arrive without sources, which makes them hard to verify.
None of this means the technology is not useful. It means that using it well requires care, especially in scientific work where accuracy and traceability matter. When you read an answer from a general chatbot, you often have no easy way to verify where the information came from or whether it is current.
These are exactly the problems we set out to address when designing Ada.
Ada is the conversational assistant of the Lovelace’s Square ecosystem. She is named after Ada Lovelace, widely regarded as one of the first computer programmers and an early visionary of what machines could do.
You type a question in plain language, and Ada searches through The Library’s articles, The Square’s codes and datasets, and the platform’s knowledge base to build an answer grounded in real content.
The key difference between Ada and a general-purpose chatbot is that Ada does not rely on memory alone. She is designed to actively search the platform every time you ask something. If a new article was published in The Library yesterday, Ada can find it today. Her answers include links to the original sources when available, so you can verify the information yourself. That said, Ada is powered by GPT-5 Nano, a smaller model that allows us to keep costs very low as a non-profit, so she can occasionally get things wrong or misinterpret a question. We encourage you to check the sources she provides.

You can ask Ada about pretty much anything related to chemometrics and the platform. Here are a few examples of the kind of conversations you can have:
You: Explain PCA in simple terms.
Ada: PCA (Principal Component Analysis) is a way to simplify complex data. Imagine you have spectra with hundreds of wavelengths. PCA finds the directions in that data where most of the variation happens, and lets you describe each spectrum with just a few numbers (called scores) instead of hundreds.
Think of it like this: if you photographed a building from many angles, most photos would look different, but a few key viewpoints would capture almost everything about its shape. PCA finds those key viewpoints for your data.
The math behind it involves decomposing your data matrix into scores and loadings… (continues with formulas and a link to the PCA article in The Library)
You: Do you have any baseline correction codes in MATLAB?
Ada: Yes! I found a few baseline correction implementations. Here’s what’s available on The Square:
- AsLS Baseline Correction by A. Gomez-Sanchez — Asymmetric Least Squares baseline correction for Raman and IR spectra. MATLAB implementation with adjustable lambda and p parameters.
If you want to understand the theory behind AsLS, The Library has an article on Baseline Correction that explains how the lambda and p parameters control the baseline flexibility.
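For context on what lambda and p actually do, here is a plain-NumPy sketch of the AsLS idea (asymmetric least squares in the style of Eilers and Boelens). It is a generic illustration, not the implementation hosted on The Square; lambda controls baseline stiffness and p the asymmetry of the weights:

```python
import numpy as np

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Sketch of AsLS: iteratively reweighted, smoothness-penalized fit."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)          # second-difference operator
    penalty = lam * D.T @ D                       # lam: baseline stiffness
    w = np.ones(n)
    for _ in range(n_iter):
        W = np.diag(w)
        z = np.linalg.solve(W + penalty, w * y)   # weighted, penalized fit
        # Asymmetry: points above the fit (peaks) get the small weight p,
        # so the baseline hugs the lower envelope of the signal.
        w = np.where(y > z, p, 1 - p)
    return z

# Demo: sloping baseline plus one narrow peak
x = np.linspace(0, 1, 200)
base = 0.5 + 0.8 * x
y = base + np.exp(-((x - 0.5) ** 2) / 0.002)
corrected = y - asls_baseline(y)
```

Production implementations typically use sparse matrices for speed on long spectra; the dense version above keeps the algorithm visible.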
You: K-means dataset — do you have one for testing?
Ada: I found some datasets you could use for clustering. Here’s one that works well:
- Olive Oil NIR — Near-infrared spectra of olive oil samples from different regions. Good for testing clustering methods because the samples naturally group by origin.
If you’re looking for more, I can search by category (spectroscopy, chromatography, etc.) or by keyword.
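If you then want to run a clustering method on such a dataset, a minimal k-means loop fits in a dozen lines of NumPy. The synthetic two-group data below stands in for real spectra of different origins; it is an illustration, not a prescribed workflow:

```python
import numpy as np

def kmeans(X, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # random initial centers
    for _ in range(n_iter):
        # Assign each sample to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned samples
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Two well-separated synthetic groups (stand-ins for samples of different origin)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (30, 5)), rng.normal(3, 0.2, (30, 5))])
labels, centers = kmeans(X, k=2)
```

On data like the olive oil spectra, you would typically apply preprocessing (and often PCA) before clustering, since raw spectral distances can be dominated by baseline and scatter effects.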
You: Show me how Gaussian smoothing works.
Ada: Gaussian smoothing works by replacing each point with a weighted average of its neighbors, where the weights follow a bell curve (a Gaussian distribution). Points closer to the center get more weight, and points further away get less.
Here’s an interactive visualization where you can adjust the sigma parameter and see how it affects the smoothing:
(An interactive Gaussian smoothing chart appears directly in the chat, with a slider to control the sigma value)
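Outside the chat, the same weighted-average operation takes only a few lines; sigma sets the width of the bell curve, exactly like the slider in the visualization. This is a generic NumPy sketch, not the platform's own code:

```python
import numpy as np

def gaussian_smooth(y, sigma):
    # Build a normalized Gaussian kernel reaching out to ~4 sigma
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()                      # weights sum to 1
    # Each output point becomes a weighted average of its neighbors
    return np.convolve(y, kernel, mode="same")

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 300))
signal = clean + 0.3 * rng.standard_normal(300)
smooth = gaussian_smooth(signal, sigma=5)
```

A larger sigma averages over more neighbors, which suppresses more noise but also flattens narrow features, the same trade-off you see when dragging the slider.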
These are simplified examples. In practice, Ada’s answers include proper mathematical notation (rendered with LaTeX), direct links to articles and codes, and source references so you can check the original content yourself.
You do not need to understand any of this to use Ada. But if you are curious about what happens behind the scenes, here is the idea.
When you ask Ada a question, there is an orchestrator working behind the scenes. Think of it as a decision-making loop. First, the orchestrator reads your question and figures out what kind of information it needs: should it look for a Library article? Search for code in The Square? Look up a specific formula? Based on that decision, it calls the right tool, retrieves the results, and evaluates whether it has enough information to answer. If not, it goes back and makes another search, sometimes chaining several tools together before it has the full picture. Only then does Ada compose her response, grounded in the content she actually found.
This orchestrator can run up to 10 steps per question, calling different tools in sequence or even in parallel. You will see this happening in real time: Ada shows you what she is searching for as she works through your question.
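In pseudocode terms, such a loop might look like the sketch below. The tool names, the planning functions, and the answer format are purely illustrative assumptions, not Ada's actual implementation:

```python
MAX_STEPS = 10  # step budget per question (mirrors the limit described above)

def orchestrate(question, tools, plan_next, enough_context):
    """Simplified decision loop: choose a tool, gather results,
    and repeat until there is enough context or the budget runs out."""
    context = []
    for _ in range(MAX_STEPS):
        tool_name, query = plan_next(question, context)   # what to look for next
        context.append(tools[tool_name](query))           # run the chosen tool
        if enough_context(question, context):             # can we answer yet?
            break
    # Compose the final answer grounded in what was actually retrieved
    return {"question": question, "sources": context}

# Tiny demo with a stub tool and a one-step plan (all names hypothetical)
tools = {"search_articles": lambda q: f"article about {q}"}
answer = orchestrate(
    "what is PCA?",
    tools,
    plan_next=lambda q, ctx: ("search_articles", "PCA"),
    enough_context=lambda q, ctx: len(ctx) >= 1,
)
```

In a real system the `plan_next` and `enough_context` decisions are made by the language model itself, and tools may run in parallel rather than strictly one at a time.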
Try the interactive demo below to see how this works for different types of questions:
Each question follows the same general pattern, but the specifics change depending on what you ask. A simple question about a method might need only one search in The Library. A broader question about a preprocessing workflow might require Ada to search The Library for articles and then The Square for code implementations, building up context before giving you an answer.
When we say Ada has “tools,” we mean she can do more than just chat. She can perform specific actions to find information for you:
Search articles
Browse The Library’s educational content by keyword or topic. If you ask about Savitzky-Golay filtering, Ada finds the relevant article and can retrieve specific sections.
Search codes
Find algorithm implementations on The Square. You can ask by method name, programming language (Python, MATLAB, R, Julia), or category (preprocessing, regression, etc.).
Search datasets
Find datasets for testing or research. Search by technique (spectroscopy, chromatography), domain (food science, pharmaceuticals), or keyword.
Explain formulas
Break down the mathematics behind a method step by step, with proper LaTeX notation. Ada retrieves the method’s description and explains the key equations.
Ada decides which tools to use based on what you ask. You do not need to specify anything. Just ask your question naturally, and she will try to find the best way to help.
Some things are also easier to understand with a picture. Ada can embed interactive visualizations directly in the conversation, the same kind you find in The Library’s articles. These are live charts where you can change parameters with sliders and see the effect in real time. If you ask “how does the smoothing window affect a moving average?”, she can show you an interactive plot right there in the chat.
Ada is not a general-purpose chatbot. She is designed to work exclusively with the content of Lovelace’s Square, which means she is more familiar with the terminology, methods, and context of chemometrics than a general assistant would be. She won’t give you cooking recipes or write poetry (though she might deflect the request with a chemometrics joke).
As the platform grows, Ada grows with it. Every new article published in The Library, every new code or dataset uploaded to The Square, becomes part of what Ada can search and reference. She is designed to stay up to date with the latest content on the platform.
Whether you’re a student encountering PCA for the first time, a researcher looking for a specific preprocessing implementation, or a professional exploring new analytical approaches, Ada is here to help you find your way through the ecosystem.