We’re excited to announce a suite of powerful new features now available through the Liminal Platform. These updates include model-agnostic secure document upload, support for OpenAI’s latest GPT-4o model, and real-time streaming in our SDK, all designed to help your workforce maximize the value of generative AI while keeping your sensitive data protected.
Securely Upload Documents to Any Model
Whether you want to analyze and summarize large volumes of financial data, provide detailed next steps for a patient based on their medical history, or identify areas for improvement from customer service survey results, document upload is a powerful capability that lets users easily provide detailed, context-specific data to a generative AI model to drive more accurate and relevant responses. Liminal’s latest update enables users to securely upload PDF and DOCX files for use with any LLM.
Automatic Detection of Sensitive Information
The Liminal Platform automatically identifies sensitive information in your documents and applies the appropriate policy (redact or mask), without any user intervention.
Seamless User Experience with Rehydration
Protected sensitive data is automatically re-inserted into the model’s responses, ensuring a seamless and consistent user experience.
Multi-Model Support
Upload documents for use with any model, even those whose APIs don’t yet support document upload, providing flexibility and ease of use.
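To make the cleanse-and-rehydrate flow concrete, here is a minimal TypeScript sketch of the general pattern: sensitive values are replaced with opaque placeholders before the prompt ever reaches a model, and those placeholders are mapped back to the original values in the model’s answer. The function names, placeholder format, and detection patterns below are illustrative assumptions, not the Liminal API.

```typescript
// Illustrative sketch of a cleanse-and-rehydrate flow (hypothetical, not the Liminal API).

type CleanseResult = { cleansed: string; placeholders: Map<string, string> };

// Replace sensitive values with opaque placeholders before the text is sent to an LLM.
function cleanse(text: string): CleanseResult {
  const patterns: [string, RegExp][] = [
    ["EMAIL", /[\w.+-]+@[\w-]+\.[A-Za-z]{2,}/g], // e.g. jane.doe@example.com
    ["SSN", /\b\d{3}-\d{2}-\d{4}\b/g],           // e.g. 123-45-6789
  ];
  const placeholders = new Map<string, string>();
  let cleansed = text;
  let counter = 0;
  for (const [label, pattern] of patterns) {
    cleansed = cleansed.replace(pattern, (match) => {
      const token = `[${label}_${counter++}]`;
      placeholders.set(token, match);
      return token;
    });
  }
  return { cleansed, placeholders };
}

// Re-insert the original values into the model's response so the user sees real data.
function rehydrate(response: string, placeholders: Map<string, string>): string {
  let result = response;
  for (const [token, original] of placeholders) {
    result = result.split(token).join(original);
  }
  return result;
}

// Example: the model only ever sees the placeholder, never the real email address.
const { cleansed, placeholders } = cleanse("Summarize the complaint from jane.doe@example.com.");
console.log(cleansed); // "Summarize the complaint from [EMAIL_0]."
const modelResponse = "The complaint from [EMAIL_0] concerns a delayed refund.";
console.log(rehydrate(modelResponse, placeholders)); // placeholder replaced with the original address
```

In the Liminal Platform, this detection, policy enforcement, and rehydration all happen automatically; the sketch simply shows why the model never needs to see the underlying sensitive values.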
GPT-4o Support
The future of generative AI is multi-model. As organizations increasingly adopt diverse AI models tailored to specific tasks, and users move between the tools that best fit their preferences, Liminal’s model-agnostic approach means organizations can confidently set and manage security policies across all LLMs, including OpenAI’s recently released GPT-4o.
Multi-Model Access
With GPT-4o, Liminal expands its already extensive catalog of over 100,000 supported models.
Enterprise-Grade Security
When using GPT-4o, or any other generative AI model, through the Liminal Platform, all content is cleansed before it is shared with the LLM, allowing you to leverage the latest tools without compromising security.
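As an illustration of what a model-agnostic integration can look like, the hypothetical client interface below swaps models by changing a single identifier while the same security layer sits in front of every call. The interface, method name, and model identifiers are assumptions made for this example, not the actual Liminal SDK surface.

```typescript
// Hypothetical shape of a model-agnostic client; the interface, method, and
// model identifiers are illustrative only, not the actual Liminal SDK.
interface SecureChatClient {
  // The same cleanse/rehydrate policies apply regardless of which model is selected.
  chat(options: { model: string; prompt: string }): Promise<string>;
}

// Switching to GPT-4o (or any other supported model) is a one-line change;
// the security layer in front of the call stays the same.
async function compareModels(client: SecureChatClient, prompt: string) {
  const fromGpt4o = await client.chat({ model: "gpt-4o", prompt });
  const fromAnotherModel = await client.chat({ model: "claude-3-opus", prompt });
  return { fromGpt4o, fromAnotherModel };
}
```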
Real-Time Streaming in the SDK
Liminal’s SDK now supports streaming for any model that has streaming capabilities, enabling lower-latency solutions and enhancing the user experience.
Reduced Latency
For customer-facing applications, the Liminal SDK delivers responses from the LLM as they come in, rather than waiting for the entire response, reducing latency and improving user satisfaction.
Consistent Experience with Rehydration
Where appropriate, streamed responses are automatically rehydrated with the data that was cleansed from the prompt, ensuring a consistent and secure user experience.
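Below is a minimal TypeScript sketch of how streamed chunks can be rehydrated as they arrive. The StreamingClient interface and method name are hypothetical stand-ins, not the actual Liminal SDK API, and the chunk-by-chunk substitution is simplified (it assumes a placeholder token never spans two chunks).

```typescript
// Hypothetical streaming interface; names and method signatures are illustrative
// assumptions, not the actual Liminal SDK surface.
interface StreamingClient {
  streamChat(options: { model: string; prompt: string }): AsyncIterable<string>;
}

// Re-insert cleansed values into each chunk as it arrives, so users see
// low-latency output that still reads as if nothing had been redacted.
// Simplified: assumes a placeholder token never spans two chunks.
async function streamWithRehydration(
  client: StreamingClient,
  prompt: string,
  placeholders: Map<string, string>, // placeholder token -> original value captured during cleansing
): Promise<string> {
  let full = "";
  for await (const chunk of client.streamChat({ model: "gpt-4o", prompt })) {
    let rehydrated = chunk;
    for (const [token, original] of placeholders) {
      rehydrated = rehydrated.split(token).join(original);
    }
    process.stdout.write(rehydrated); // render incrementally instead of waiting for the full response
    full += rehydrated;
  }
  return full;
}
```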
Get Started With Secure Generative AI Today
At Liminal, we continually push to provide you with the best tools and features to help your organization reap the benefits of generative AI while protecting your sensitive company, customer, and employee data. These latest updates are designed to make your experience smoother, more secure, and more efficient.
To learn more about these newest features, or to get a free demo of the full Liminal Platform, click here.