Saturday, November 23, 2024

Supercharge AI models with advanced concept maps


This blog post focuses on new features and improvements. For a complete list, including bug fixes, please see the Release Notes.

Advanced Concept Management

A concept represents an entity, similar to a “label” or “keyword”, and is also referred to as a “class” in machine learning. Concepts can be used to annotate inputs containing the associated entity or added to a model for entity recognition. Data tied to these concepts helps the model learn and understand the entity.

On the Clarifai platform, you can upload your inputs, create concepts, and manage various tasks within your application. We have now introduced a new way of defining concept relationships in Clarifai, allowing you to define hierarchical and semantic links between concepts. This helps you build robust knowledge graphs and enables more advanced computer vision and natural language processing capabilities.

By mapping concepts as synonyms, hyponyms, or hypernyms, you can create context-aware models that deliver more accurate results. This is especially useful for managing data, building search engines, and organizing large data sets, ultimately improving the effectiveness of AI applications.

Hyponym (subtype relation):
A hyponym represents a “type of” relationship, such as “Dog” being a type of “Animal”. This hierarchy helps models generalize or specialize when identifying concepts.
Use: Hyponyms refine search results; a search for “Animal” might return “Dog,” “Cat,” and other animals.

Hypernym (supertype relation):
The opposite of a hyponym, a hypernym indicates a broader category or parent concept, such as “Animal” being a hypernym (parent) of “Dog”.
Use: Hypernyms group specific entities under broader categories, which helps with data organization.

Synonym (equivalent relation):
A synonym links concepts with the same or similar meanings, such as “puppy” and “pup.”
Use: Synonyms ensure that different terms for the same concept are treated as equivalent, which improves search accuracy.
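The three relationship types above can be sketched as a tiny in-memory knowledge graph. This is not Clarifai SDK code, just a hypothetical illustration of how synonym and hyponym links expand a search query:

```python
# Hypothetical mini knowledge graph; in practice these relations would
# be stored against concepts in your Clarifai application.
SYNONYMS = {"puppy": {"pup"}, "pup": {"puppy"}}   # equivalent concepts
HYPONYMS = {"animal": {"dog", "cat"}}             # "dog" is a type of "animal"

def expand_query(term: str) -> set[str]:
    """Return the term plus its synonyms and all of its hyponyms (subtypes)."""
    expanded = {term}
    expanded |= SYNONYMS.get(term, set())        # treat synonyms as equivalent
    for hyponym in HYPONYMS.get(term, set()):
        expanded |= expand_query(hyponym)        # recurse for deeper hierarchies
    return expanded

print(sorted(expand_query("animal")))  # ['animal', 'cat', 'dog']
print(sorted(expand_query("puppy")))   # ['pup', 'puppy']
```

This is why a search for “Animal” can return “Dog” and “Cat”: the hyponym links let the query generalize down the hierarchy, while synonym links keep equivalent terms interchangeable.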

Below is an example of how to add and manage concept relationships in a Clarifai application using the Python SDK.
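A minimal sketch of what this can look like with the `clarifai` Python package. The method name `create_concept_relations` and its argument order reflect the SDK at the time of writing; check the current SDK documentation for exact signatures. The IDs and credentials below are placeholders:

```python
# Concept relationships to create: (subject concept, predicate, object concepts).
# Predicate names ("hypernym", "hyponym", "synonym") are the relation types
# described above; verify the exact strings against the Clarifai docs.
RELATIONS = [
    ("dog", "hypernym", ["animal"]),      # "animal" is a hypernym of "dog"
    ("animal", "hyponym", ["dog", "cat"]),
    ("puppy", "synonym", ["pup"]),
]

def add_relations() -> None:
    """Create the relations above in a Clarifai app (requires credentials)."""
    from clarifai.client.app import App  # pip install clarifai

    # Placeholder user/app IDs and personal access token.
    app = App(user_id="your-user-id", app_id="your-app-id", pat="your-pat")
    for subject, predicate, objects in RELATIONS:
        # One predicate per object concept, per the SDK's list-based signature.
        app.create_concept_relations(subject, objects, [predicate] * len(objects))
```

Call `add_relations()` with your own credentials filled in to create the relationships in your application.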

Explore this guide and the included notebook to learn, step by step, how to add concept relationships to your data.

New model released: Prompt-Guard-86M

LLM-powered applications may be vulnerable to prompt attacks, where malicious prompts are designed to manipulate the model’s behavior against the developer’s intentions.

There are two common types of prompt attacks:

  1. Prompt injections: Inputs that exploit the inclusion of untrusted data from third parties or users in a model’s context window, causing the model to follow unintended instructions.
  2. Jailbreaks: Malicious instructions designed to bypass a model’s built-in safety features.

Prompt-Guard-86M is a multilingual classifier model designed to detect and prevent prompt injection and jailbreak attacks on LLM-powered applications. The model is trained on a comprehensive corpus of attack data to detect both explicitly malicious prompts and those containing subtle injected inputs.

As a multi-label model, it classifies inputs into three distinct classes: benign, injection, and jailbreak, providing a robust mechanism to mitigate risks in LLM-powered applications.
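To make the mitigation concrete, here is a hypothetical gating function over the three class scores. The `scores` dict mimics per-class confidences; in a real application you would obtain them from a prediction call against the Prompt-Guard-86M model on Clarifai:

```python
def is_safe(scores: dict[str, float], threshold: float = 0.5) -> bool:
    """Allow a prompt only if neither attack class crosses the threshold.

    `scores` maps the model's class names ("benign", "injection",
    "jailbreak") to confidences. The 0.5 threshold is an illustrative
    default, not a recommended production setting.
    """
    return (scores.get("injection", 0.0) < threshold
            and scores.get("jailbreak", 0.0) < threshold)

print(is_safe({"benign": 0.97, "injection": 0.02, "jailbreak": 0.01}))  # True
print(is_safe({"benign": 0.05, "injection": 0.10, "jailbreak": 0.85}))  # False
```

An application would run this check on user input before forwarding it to the LLM, rejecting or flagging anything classified as an injection or jailbreak.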

The model is now available on the Clarifai platform, and we have some pre-built examples to get you started. Try out Prompt-Guard-86M!


Input Viewer

Added option to download the original asset from the Input Viewer

  • You can now download the original asset directly from the Input Viewer page. Right-click on the canvas area and a button will appear that allows you to download the original input. Previously, only masked versions could be downloaded.

Improved annotation identification in the Input Viewer

  • When hovering over an annotated object or selecting it in the Input Viewer canvas, the corresponding annotation is now highlighted in the right sidebar. This makes it easier for users to quickly identify the annotation for deletion, editing, or other purposes.

Some agent system operators have been deprecated

Agent system operators are fixed-role operators that act as “non-trainable models.” They help you connect, direct, and network your models in a workflow.

We have deprecated the following agent system operators: Custom Code operator, AWS Lambda operator, Isolation operator, Tesseract operator, Neural Tracker, and Neural Tracker Lite.

Learn more about agent system operators and how you can use them in workflows for various use cases here.

Additional changes

Ready to start building?

On the Community Platform, you can upload your inputs, create concepts, and manage various tasks within your application. Check out our Quick Start guide on the Community Platform. Sign up here to get started!

If you have any questions, please send us a message on our Community Discord channel. Thanks for reading!


