
How do we use LLMs in our projects?

We mainly use LLMs in two ways: in chatbots that gather information from end users, and more generatively in an exploratory project that helps young people find purpose in life.

Because that's the most important thing.

ChatGPT helps students find purpose in life (case study)

Ikigai+ is a chatbot designed to help students explore their core values, life purpose, and unique strengths through structured self-reflection and engaging dialogues with peers. Its goal is to enable students to apply their distinct talents to personally meaningful issues that impact the broader world, as emphasized by Damon in 2008.

The chatbot facilitates deep conversations powered by expert-trained AI models and uses validated questionnaires to assess career strengths. It encourages users to discover societal needs through guided questions and curated media recommendations. Users can discuss their findings with both the AI (named Spark) and their peers, which supports their journey towards identifying their "Ikigai"—a Japanese concept that represents a person's reason for being.

Ikigai+ is being used in educational settings, notably with a project at Cergy Paris University, which included Ikigai workshops for 40,000 students aimed at addressing societal and ecological challenges. These workshops are part of the university's larger initiative, CY Generations, which seeks to foster sustainable innovation through research, training, and entrepreneurship.

Feedback from students who participated in the Ikigai workshops has been overwhelmingly positive, with many noting profound insights into their personal and professional lives, improved self-awareness, and a clearer sense of purpose.

The Learning Planet Institute, which oversees the project, focuses on creating innovative learning and cooperation methods that cater to the needs of youth and the planet, supporting a variety of educational and social initiatives.

We built a Discord server connected to a Python-based framework that lets us assemble reusable scenarios, mixing scripted questions with LLM calls (OpenAI's GPT-4) when we need to summarize, generate, or analyze unstructured answers from the students. Choosing Discord paid off: it reaches the target audience, costs roughly a fifth of what a mobile application would, and is perfectly suited to chatbots.
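To make the pattern concrete, here is a minimal sketch of a scripted-plus-LLM scenario, assuming discord.py and the official OpenAI Python client; the questions, session handling, and bot wiring are illustrative, not our production framework.

```python
# Scripted questions drive the dialogue; the LLM is only called at the end,
# to summarize the unstructured free-text answers.
import discord
from openai import OpenAI

SCRIPTED_QUESTIONS = [
    "What activities make you lose track of time?",
    "Which societal issue matters most to you?",
]

oai = OpenAI()                        # reads OPENAI_API_KEY from the environment
sessions: dict[int, list[str]] = {}   # user id -> collected answers

def summarize(answers: list[str]) -> str:
    resp = oai.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Summarize the student's answers in two sentences."},
            {"role": "user", "content": "\n".join(answers)},
        ],
    )
    return resp.choices[0].message.content

class ScenarioBot(discord.Client):
    async def on_message(self, message: discord.Message):
        if message.author.bot:
            return
        if message.content == "!start":
            sessions[message.author.id] = []
            await message.channel.send(SCRIPTED_QUESTIONS[0])
            return
        answers = sessions.get(message.author.id)
        if answers is None:
            return                    # user hasn't started a scenario
        answers.append(message.content)
        if len(answers) < len(SCRIPTED_QUESTIONS):
            await message.channel.send(SCRIPTED_QUESTIONS[len(answers)])
        else:
            await message.channel.send(summarize(answers))
            del sessions[message.author.id]

intents = discord.Intents.default()
intents.message_content = True
ScenarioBot(intents=intents).run("DISCORD_BOT_TOKEN")  # token is a placeholder
```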

Voice AI + LLMs + Scripts to replace humans in repetitive calls & texts

One of the best uses of an LLM (Large Language Model) in software is wherever language itself must be understood or produced. In production, however, the main risk is hallucination, so we limit LLMs to narrowly scoped language tasks. Let's look at a real-life example.

LLM artificial intelligence to help salespeople

We found that salespeople at many temporary work agencies were spending 30 to 60 minutes a day dealing with workers who didn't show up for their shifts, causing problems for clients and financial losses. To tackle this, we built a system that automatically contacts workers before their shifts by text, call, or WhatsApp; if a worker doesn't respond, it offers the job to someone else. The system is linked to a database of workers, clients, and assignments, which makes replacement searches easier.
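A simplified sketch of that escalation logic is below; the channel callables and data model are illustrative stand-ins for the real integrations with SMS, telephony, and WhatsApp providers.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    phone: str

def confirm_shift(worker: Worker, send_sms, send_whatsapp, call) -> bool:
    """Try each channel in order; return True as soon as the worker confirms."""
    for channel in (send_sms, send_whatsapp, call):
        if channel(worker.phone, "Can you confirm tomorrow's shift?"):
            return True
    return False

def staff_shift(assigned: Worker, backups: list[Worker], *channels) -> Worker | None:
    """If the assigned worker never confirms, offer the shift to backups."""
    for candidate in [assigned, *backups]:
        if confirm_shift(candidate, *channels):
            return candidate
    return None  # nobody confirmed: escalate to a human salesperson
```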

Are LLMs worth it?

LLMs are undeniably a game changer, ushering in a whole new set of features in software development. Their direct ability to grasp user intent marks a revolution. Now, instead of intricate forms and UI designs, all it takes is a simple text exchange to communicate with users via their preferred medium, be it email or messaging apps.

In our low-code and headless CMS projects, LLMs shine brightest in:

1. Structured information extraction from texts: classification, named-entity recognition, and pulling out key elements such as dates, numbers, and destinations (see the sketch after this list).

2. Text summarization: Particularly valuable in our headless CMS media projects, LLMs aid in creating article descriptions, generating titles, proofreading content, and extracting key facts.

3. Translation: Seamlessly integrated into our content workflows, translation tasks are handled efficiently, enhancing our headless CMS projects.

4. Intent understanding: By analyzing messenger conversations, LLMs excel at extracting user intent, enabling tasks like querying work hours for the previous month ("how many hours did I work last month?").
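As promised in item 1, here is a minimal extraction sketch, assuming the official OpenAI Python client and a JSON-mode-capable model; the schema and prompt are illustrative.

```python
import json
from openai import OpenAI

client = OpenAI()

def extract_trip(text: str) -> dict:
    """Pull dates, traveller count, and destination out of free text as JSON."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # any JSON-mode-capable model works here
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Extract departure_date, return_date, travellers "
                        "(a number) and destination from the message. "
                        "Reply with a JSON object only."},
            {"role": "user", "content": text},
        ],
    )
    return json.loads(resp.choices[0].message.content)

print(extract_trip("Two of us fly to Lisbon on May 3rd, back on the 10th."))
```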

Which AI/LLM is the best?

We primarily rely on Claude 3 and OpenAI's GPT-4 engines, which we consider to be among the best available, competing closely with Google's Gemini. However, as the field evolves rapidly, what's considered top-tier can shift within weeks.

Large Language Models embedded in your applications

When integrating LLMs into your applications, consider whether they're a peripheral feature or the core of your app. Conversational interfaces, like those in WhatsApp or Telegram, offer convenience, but they're not always the best choice. While LLMs can improve user experiences, some tasks, like data reporting, are better suited to traditional formats like tables and charts.

However, LLMs excel at tasks like refining search criteria through conversational interactions. Also, consider leveraging email conversations for internal business apps to streamline processes, allowing users to request reports directly via email instead of navigating through menus in a web app.

ChatGPT alternatives for coding

While ChatGPT has made significant strides in coding assistance, there are several ChatGPT alternatives that offer unique advantages for coders.

GitHub Copilot, developed in collaboration with OpenAI, is an AI assistant that offers on-the-fly code suggestions. It integrates seamlessly with your coding process on GitHub, making it a preferred choice for many developers.

Another notable alternative is Cursor. This tool not only provides AI-generated code suggestions, but also offers features like code improvement and bug fixing.

For those seeking advanced AI-powered auto-suggestions, Tabnine is an excellent choice. It's trained on a large dataset of open-source programs, ensuring contextually relevant code suggestions.

Google's Bard (since renamed Gemini) is another powerful alternative, known for its improved performance in reasoning, coding, and multilingual capabilities.

Lastly, Megatron-Turing NLG stands out as a dedicated model for large-scale natural language generation, which also makes it usable for coding tasks.

How do LLMs work?

Large language models (LLMs) like ChatGPT operate by predicting and selecting the next word in a sequence based on probabilities derived from extensive datasets. These models generate text that is contextually appropriate and stylistically coherent by using a system that calculates the likelihood of word sequences. This process is essential because the vast number of possible word combinations exceeds what is available in human-generated texts.
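A toy example makes the prediction step concrete: the model assigns a score (logit) to every token in its vocabulary, the scores are turned into probabilities with a softmax, and the next token is sampled from that distribution. The vocabulary and numbers below are made up; real LLMs do this over vocabularies of tens of thousands of tokens.

```python
import math, random

def softmax(logits):
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["cat", "dog", "mat", "moon"]
logits = [2.1, 0.3, 3.5, -1.0]            # scores for "The cat sat on the ..."
probs = softmax(logits)

# Sample the next token in proportion to its probability.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, (round(p, 3) for p in probs))), "->", next_token)
```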

LLMs function not by memorizing text but by understanding the contextual use of words through patterns observed in their training data. This approach allows them to produce outputs that closely mimic human-written text, suggesting a simplicity and rule-like regularity in human language that these models can exploit.

In terms of architecture, LLMs often rely on transformer models, which are particularly effective for tasks involving human language. Transformers use an attention mechanism that helps the model focus on relevant parts of the input data when making predictions. This feature allows the model to manage dependencies and nuances in language use, which is critical for producing coherent and contextually relevant text.
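The core operation is compact enough to show in a few lines: scaled dot-product attention, sketched here in NumPy over a tiny random "sequence". Real models add learned projections, multiple heads, and many stacked layers on top of this.

```python
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V: each position mixes information
    from the positions it attends to."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))      # 4 tokens, 8-dimensional embeddings
out = attention(x, x, x)         # self-attention: Q = K = V = x
print(out.shape)                 # (4, 8)
```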

Overall, the success of LLMs at tasks such as text generation hints at deeper regularities in the structure and rules of human language, supporting broader scientific efforts to understand and model the cognitive processes associated with language.

How do we make LLMs better with the "Tree of Thoughts" approach?

Tree of Thoughts, or ToT, is a new method that improves how language models solve complex problems. Traditionally, language models work sequentially, making one decision at a time, which can limit their ability to solve problems that need more strategic thinking or revisiting earlier decisions. ToT changes this by allowing a language model to explore multiple pathways or thoughts at once and then decide which path to follow based on evaluations it makes along the way. This method mirrors human problem-solving more closely, where we might consider different possibilities before deciding on the best course of action.

What makes ToT especially powerful is its flexibility and adaptability. It can generate a variety of potential solutions or "thoughts" and can backtrack or look ahead to make better decisions. This process is structured as a tree where each branch represents a potential decision path, and the model navigates through this tree to find the most promising solution. The tree structure also allows the model to evaluate and refine its choices continuously, which is crucial for complex decision-making tasks that standard language models struggle with. By simulating a more deliberate, thoughtful decision-making process, ToT significantly enhances the problem-solving capabilities of language models.
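In schematic terms, ToT is a search loop over partial solutions. The sketch below shows a breadth-first variant; in practice `propose` and `evaluate` would themselves be LLM calls, and all names here are illustrative rather than the paper's reference implementation.

```python
from typing import Callable

def tree_of_thoughts(
    root: str,
    propose: Callable[[str], list[str]],   # state -> candidate next thoughts
    evaluate: Callable[[str], float],      # state -> promise score
    depth: int = 3,
    beam: int = 2,
) -> str:
    frontier = [root]
    for _ in range(depth):
        # Branch: extend every surviving state with each proposed thought.
        candidates = [f"{s}\n{t}" for s in frontier for t in propose(s)]
        # Prune: keep only the `beam` most promising partial solutions.
        frontier = sorted(candidates, key=evaluate, reverse=True)[:beam]
    return max(frontier, key=evaluate)
```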

How do you make sure that your LLM gives accurate answers on top of your content?

Retrieval Augmented Generation (RAG) represents a sophisticated fusion of traditional language models with dynamic information retrieval techniques to address the limitations inherent in the static knowledge base of pre-trained large language models (LLMs). This method significantly enhances the model's ability to process and respond to queries that demand up-to-date knowledge or domain-specific expertise that the LLM has not been explicitly trained on.

The core advantage of RAG lies in its architecture, which seamlessly integrates the retrieval of external documents into the generation process of LLMs. In practice, when a query is received, RAG first identifies and retrieves relevant documents or data fragments from a continuously updated external database. This could be from structured databases or vast unstructured datasets like the entirety of Wikipedia or specialized scholarly articles.

Once relevant information is retrieved, it is then fed into the LLM along with the original query. This combined input significantly enriches the context available to the model, allowing it to generate responses that are not only contextually richer but also more accurate and specific to the query at hand. This mechanism allows RAG to circumvent one of the major drawbacks of static LLMs — their inability to incorporate new knowledge post-training without undergoing a resource-intensive retraining process.
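Here is a minimal end-to-end RAG sketch, assuming the official OpenAI Python client for both embeddings and generation; the two-document in-memory "database" and brute-force cosine search stand in for a real vector store.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()
DOCS = [
    "Our support line is open 9am-6pm CET, Monday to Friday.",
    "Refunds are processed within 14 days of the return request.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(DOCS)               # indexed once, queried many times

def answer(query: str, k: int = 1) -> str:
    q = embed([query])[0]
    # Cosine similarity between the query and every document.
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    context = "\n".join(DOCS[i] for i in np.argsort(sims)[::-1][:k])
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": query},
        ],
    )
    return resp.choices[0].message.content

print(answer("When can I call support?"))
```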

Furthermore, RAG systems can be finely tuned to specific application needs without extensive retraining. For example, in medical applications where the latest research findings might change the best practices for treatment, RAG can retrieve the most recent and relevant research articles to provide up-to-date medical advice. Similarly, in financial services, RAG can pull the latest market data and expert analyses to offer real-time financial insights.

From a technical standpoint, the success of a RAG system hinges on the effectiveness of both its retrieval mechanism and the subsequent integration of retrieved data into the language generation process. The retrieval component must be capable of understanding the semantic essence of the query to fetch pertinent information. Advanced vector embedding techniques, often fine-tuned on domain-specific corpora, play a crucial role here, ensuring that the retrieval process is both precise and aligned with the query’s context.

On the generation side, the challenge lies in effectively synthesizing the retrieved information with the original query to produce coherent, relevant, and factually accurate responses. This often involves sophisticated algorithms that can balance the input from the retrieved documents against the pre-existing knowledge encoded in the LLM, mitigating issues like information redundancy or the hallucination of incorrect facts.

In essence, RAG transforms LLMs from static repositories of pre-trained knowledge into dynamic systems capable of accessing and integrating external intelligence. This not only extends the utility of LLMs in real-world applications but also significantly enhances their performance in knowledge-intensive tasks where accuracy and currency of information are paramount.

Which AI is better than ChatGPT?

In a recent expansive evaluation of large language models (LLMs) ranging from 7 billion to 70 billion parameters, including industry leaders like ChatGPT and GPT-4, a careful comparison reveals nuanced differences in model capabilities, particularly in multilingual contexts and instruction adherence.

This comprehensive test incorporated a structured evaluation using professional German data protection training exams. The models were assessed on their ability to accurately translate and comprehend instructions in German, process the information, and then accurately respond to multiple-choice questions posed in English. This dual-language challenge was designed to mirror real-world applications where understanding and output precision across languages are crucial.

Model Performance Across Sizes

Larger models generally outperformed smaller counterparts, with 70B models showing superior accuracy and adherence to complex instructions. Notably, models like lzlv_70B and SynthIA-70B achieved perfect scores, indicating their robustness in handling nuanced, multilingual tasks.

Instruction Adherence

A critical aspect of the evaluation was the models' ability to follow specific instructions, such as responding with a simple "OK" or modifying the length of their responses based on the command. Here, the disparity between models was pronounced, with some failing to adjust their responses appropriately, underscoring the challenges in programming LLMs to handle dynamic conversational cues.

Format Sensitivity

The performance of the models also varied significantly based on the input format, suggesting that format optimization could be as crucial as model selection depending on the specific application.

Customization and Fine-Tuning

The results suggest that beyond raw processing power, the ability to customize and fine-tune LLMs for specific types of tasks—especially those requiring cross-linguistic capabilities and strict adherence to instructions—is essential for practical applications.

Model Selection for Specific Tasks

Depending on the specific requirements of a task, whether it be a creative generation or strict data privacy training compliance, different models and configurations may be more effective, highlighting the importance of targeted model selection in deploying LLMs effectively.

