Lenovo has launched an entry-level AI inferencing server designed to make edge AI accessible and affordable for SMBs and enterprises. Showcased at MWC25 as part of a full stack of cost-effective, sustainable, and scalable hybrid AI solutions, the Lenovo ThinkEdge SE100 bridges the gap between client devices and edge servers, delivering enterprise-level intelligence for every business and providing a critical link to enabling AI capabilities in all environments.
Rather than sending data to the cloud for processing, edge computing uses devices located at the data source, reducing latency so businesses can make faster decisions and solve challenges in real time. The edge market is projected to grow at an annual rate of nearly 37% through 2030.
Meeting customers where they are, the new AI inferencing server redefines edge AI with powerful, scalable, and versatile performance that accelerates ROI. Recent global IDC research commissioned by Lenovo reveals that, despite a three-fold increase in spending, ROI remains the greatest barrier to AI adoption. Adaptable for desktops, wall mounts, ceilings, and 1U racks, the SE100 is engineered to be uniquely affordable, breaking price barriers with AI inferencing performance that supports better business outcomes while lowering costs.
As a compact AI-ready edge solution, the Lenovo ThinkEdge SE100 is well suited to constrained spaces without compromising performance, answering customer demand for accelerated AI-level compute that can go anywhere. The server delivers enterprise-level, whisper-quiet performance with enhanced security features in a design that is GPU-ready for AI workloads such as real-time inferencing, video analytics and object detection across telco, retail, industrial and manufacturing environments.
The ThinkEdge SE100 maximises performance in an AI-ready server, evolving from a base device into an industrial-grade, GPU-optimised solution as the end user sees fit. It can be equipped with six or eight performance cores, maximising the device's power in a smaller footprint.
Across industries, the inferencing server is designed for low latency and workload consolidation, supporting hybrid cloud deployments and machine learning for AI tasks such as object detection and text recognition, all while maintaining data security. The ThinkEdge SE100's rugged design includes dust filtering and tamper protection to give peace of mind in real-world environments. Robust security controls, including USB port disabling and disk encryption, protect sensitive data from outside threats.
PCR Tech and IT retail, distribution and vendor news