New Feature Roundup: Scrape and Search Tools, Expanded User Roles, and More

Explore the latest updates to the Liminal Platform, designed to enhance generative AI utility while strengthening security, compliance, and governance.

We are excited to introduce several new features now available as part of the Liminal Platform to help boost productivity and enhance your security capabilities! This release includes a powerful tool framework that lets users draw on information outside an LLM's training data; model support for Amazon Nova, Claude 3.5 Haiku, Groq, and Grok; new functionality for even more granular role-based permissions; and improvements to our detection engine. These updates empower your team to derive greater utility from generative AI while bolstering your security, compliance, and governance control set.

Enhance AI Outputs Beyond Training Data

Large Language Models are inherently limited by their training data, which can quickly become outdated or may not include specialized information relevant to your needs. Our latest release introduces a new tool framework that significantly expands the capabilities of Liminal’s platform by equipping any LLM supported by Liminal to access data outside of that model’s training set. 

The framework is designed to work seamlessly with the wide array of providers and the 350K+ models supported by Liminal. We're launching this framework with two powerful tools, and many more are on the horizon.
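To make the idea concrete, here is a minimal sketch of how opting into the tool framework might look from an integration's point of view. The client object, its chat method, and the tool names are hypothetical placeholders used purely for illustration; they are not Liminal's documented API.

    # Illustrative sketch only: "client", its chat() method, and the tool
    # names below are hypothetical placeholders, not the documented Liminal API.

    def ask_with_tools(client, model_id, prompt, tools=("scrape", "search")):
        """Send a prompt to a chosen model with platform tools enabled.

        When a tool is enabled, the platform can fetch a URL or run a web
        search, then feed the retrieved content back to the model before the
        final answer is generated.
        """
        response = client.chat(
            model=model_id,      # any of the 350K+ supported models
            prompt=prompt,
            tools=list(tools),   # opt in to the new tool framework
        )
        return response.text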

Scrape Tool: Process Any Web Content With LLMs

The first tool in our framework, the Scrape Tool, enables any LLM to understand, analyze, and process content directly from any URL. This tool opens up a myriad of practical applications across different fields. In marketing, teams can leverage the tool to dissect competitor web pages, extracting insights on key messaging strategies and design elements. Educators can utilize it to gather and summarize information from educational websites, offering students a concise overview of complex topics. Rather than manually extracting and inputting web content into prompts, users can now simply provide a URL, and the selected LLM will perform the analysis and processing directly.
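Building on the earlier sketch, the hypothetical snippet below shows the marketing use case in miniature: the prompt references a URL instead of pasted page content, and the Scrape Tool handles retrieval. The function and parameter names are illustrative, not Liminal's actual interface.

    # Hypothetical Scrape Tool usage, reusing ask_with_tools from the sketch
    # above: the platform fetches the page at the given URL and supplies its
    # content to the selected model, so nothing needs to be pasted by hand.

    def analyze_competitor_page(client, model_id, url):
        prompt = (
            f"Review the page at {url}. Summarize its key messaging and "
            "call out any design elements that stand out."
        )
        return ask_with_tools(client, model_id, prompt, tools=("scrape",))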

Search Tool: Augment Prompts with Web Searches

The second tool in our framework, the Search Tool, empowers the Liminal Platform to determine whether a web search can provide relevant, real-time data to support user prompts. For example, if a user seeks tax advice based on current regulations, the Search Tool can now pull up-to-date results about IRS guidelines to help the LLM deliver a more tailored output. Similarly, a researcher can have findings from newly published articles incorporated into the model's response. Previously, interactions relied solely on the model's training data; now users can draw on the most current information available.
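Continuing the earlier sketch, a Search Tool call might look like the following. The user simply asks a time-sensitive question, and the platform decides whether a web search should supplement the model's training data; the client, model identifier, and prompt are illustrative assumptions, not the actual API.

    # Hypothetical Search Tool usage, calling ask_with_tools from the earlier
    # sketch. The platform decides whether a web search would add relevant,
    # current information before the model answers.
    client = ...  # placeholder for however your integration obtains a client
    answer = ask_with_tools(
        client,
        model_id="claude-3-5-haiku",  # illustrative model choice
        prompt="Summarize the current IRS guidance on home-office deductions.",
        tools=("search",),            # allow a web search if it helps
    )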

These initial tools mark the beginning of an ongoing expansion of capabilities, setting the stage for more feature-rich tools that will continue to enhance how users interact with generative AI.

New Model Support

The future of AI isn't about finding one perfect model—it's about leveraging the right model for each specific task. Different models excel at different types of work, from rapid response chatbots to deep analysis of complex documents. Building on our existing support for over 350K models, we're excited to introduce support for four notable new additions to the Liminal Platform.

  • Amazon’s Nova (via Amazon Bedrock) - excellent for multimodal tasks
  • Anthropic’s Claude 3.5 Haiku - optimized for code completion, chatbot interactions, data extraction, and content moderation
  • Groq - innovative architecture delivers high inference speeds and low latency for rapid responses
  • Grok - excels at understanding complex topics and handling long-form content

When using these or any other generative AI model through the Liminal Platform, you can ensure all content is cleansed before being shared with the LLM, allowing you to leverage the latest tools without compromising security.
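A rough sketch of that flow is shown below; the cleanse and chat method names are hypothetical stand-ins for the platform's detection and redaction step, not documented calls.

    # Conceptual cleanse-before-send flow. cleanse() and chat() are
    # hypothetical stand-ins for the platform's detection and redaction step.

    def ask_safely(client, model_id, raw_prompt):
        cleansed = client.cleanse(raw_prompt)        # mask sensitive data first
        reply = client.chat(model=model_id, prompt=cleansed)
        return reply.text                            # model never sees raw values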

Increased Control with Expanded User Roles

Security and governance in AI deployment require precise control over who can access what capabilities. Our latest update introduces even more granular role-based permissions, giving administrators unprecedented control over how their teams interact with generative AI.

Administrators can now define permissions at a more detailed level, allowing them to tailor access based on specific needs. For example, some team members might need the ability to create model-agnostic assistants but should not have access to logs, while others may require the ability to manage data privacy policies but not model access permissions. This granular control ensures teams have access to exactly the tools they need—nothing more, nothing less.
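As a concrete, though hypothetical, illustration of the examples above, granular roles can be thought of as explicit capability maps. The permission names below are illustrative placeholders, not Liminal's actual configuration schema.

    # Illustrative role definitions only; the permission keys are hypothetical,
    # not Liminal's actual configuration schema.
    ROLES = {
        "assistant_builder": {
            "create_assistants": True,        # can build model-agnostic assistants
            "view_logs": False,               # no access to logs
            "manage_privacy_policies": False,
            "manage_model_access": False,
        },
        "privacy_admin": {
            "create_assistants": False,
            "view_logs": True,
            "manage_privacy_policies": True,  # owns data privacy policies
            "manage_model_access": False,     # but not model access permissions
        },
    }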

These enhanced permissions enable organizations to implement robust governance frameworks, ensuring compliance with internal policies and external regulations while maintaining operational efficiency.

Efficiency Updates to the Liminal Engine

In our ongoing efforts to deliver both robust data protection and a delightful user experience, we've implemented significant updates to the Liminal Engine. These improvements include:

  • Dramatic Speed Enhancement - We've integrated a new model into our detection algorithm, accelerating sensitive data identification by approximately 92%. This breakthrough means faster processing times and more efficient workflows for your team.
  • Improved Detection Accuracy - We've refined our classification system to better distinguish between sensitive data and similar but non-sensitive content. This update significantly reduces false positives in areas such as court cases, policies, doctrines, and scientific principles, ensuring more precise and reliable detection.
  • Formula Rendering - When working with mathematical content, presentation matters. Our new formula rendering capability automatically transforms LLM-generated mathematical formulas into clear, professionally formatted expressions, making technical documentation and analysis easier to read, as in the example below.
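For instance, an answer that arrives from the model as raw markup like the snippet below is now displayed as a cleanly typeset formula rather than as plain text. The LaTeX here is only an illustration of the kind of markup LLMs commonly emit.

    % Raw markup an LLM might emit for the quadratic formula; the platform
    % renders it as a formatted expression instead of showing the markup itself.
    x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}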

Get Started with Secure Multi-Model Generative AI Today

At Liminal, we're committed to advancing the capabilities of our platform to help your organization harness the full potential of generative AI while maintaining the highest standards of data security. These latest updates deliver enhanced productivity, expanded model choice, and strengthened protection for your sensitive information.

To explore these new features or schedule a personalized demo of the Liminal Platform, click here.