We are pleased to share that our collaborative research with NAIST, “Efficient Kernel Mapping and Comprehensive System Evaluation of LLM Acceleration on a CGLA,” has been formally accepted for publication in the international journal IEEE Access. The full article is available for download here.
This work represents the first end-to-end evaluation of Large Language Model (LLM) inference on a non-AI-specialized Coarse-Grained Linear Array (CGLA) accelerator, using the state-of-the-art Qwen3 model family as the benchmark. The results reinforce the viability of general-purpose CGLA architectures, not just fixed-function ASICs or high-power GPUs, for next-generation LLM inference, and demonstrate that compute efficiency, programmability, and adaptability to changing algorithms can coexist in a reconfigurable architecture.
For LENZO, this is a meaningful milestone in advancing the underlying theory and validation behind our CGLA-based compute vision.
Publication Details
Title: Efficient Kernel Mapping and Comprehensive System Evaluation of LLM Acceleration on a CGLA
Journal: IEEE Access
DOI: 10.1109/ACCESS.2025.3636266
LENZO is excited to announce that we will be exhibiting at the Japanese Job Expo hosted by ChallengeRocket — a global event connecting leading technology companies with top engineering talent across Europe.
As we continue building next-generation AI and blockchain semiconductor technology designed in Japan, we are expanding our engineering team in Poland, in Ukraine, and in remote positions worldwide.
This event is an opportunity for talented engineers to meet the LENZO team, learn about our work, and explore open roles in our rapidly growing company.
Open Positions
System on Chip (SoC) Engineer
C/C++ Programmer (Blockchain / Mining Software)
Why Join LENZO?
At LENZO, we’re building CGLA-based compute engines for the next era of AI inference and high-efficiency crypto mining — redefining how circuits are designed, optimized, and deployed. Our engineering culture emphasizes:
Cutting-edge semiconductor R&D
Hands-on collaboration between hardware, firmware, and algorithm teams
Speed, autonomy, and global teamwork
A chance to shape the world’s next major compute architecture
Meet Us at the Japanese Job Expo
If you’re an engineer passionate about advanced compute hardware, mining systems, or low-level software, we’d love to meet you.
In this technical blog, Yoshifumi Munakata outlines recent progress made by LENZO's LLM Team in getting LLMs running on CGLA.
The rise of the token-driven digital economy demands a new class of compute architecture—one capable of delivering high performance, ultra-low power consumption, and seamless scalability. At LENZO, we are developing exactly that with CGLA (Coarse-Grained Linear Array), our next-generation compute engine designed to accelerate the AI workloads that power modern applications.
Among the most important of these applications are Transformer-based large language models (LLMs) such as ChatGPT, Gemini, and Llama. These models define today’s AI landscape—and we are proud to share that CGLA now runs full Transformer-based inference.
Running Llama on CGLA
The LENZO LLM Team has successfully brought Llama—one of the world’s most widely adopted open-source LLM families—to run natively on our CGLA architecture.
This achievement demonstrates:
CGLA’s compatibility with mainstream Transformer architectures
CGLA’s ability to execute real-world, high-demand inference workloads
A clear path toward accelerating any Llama.cpp-based model, including Llama, Qwen, DeepSeek, Gemma, and others
Whether users choose a small quantized model or a larger configuration, CGLA handles the full inference pipeline, from prompt processing to token generation.
API Access: OpenAI-Compatible Interface
To make CGLA easy for developers to use, we integrated CGLA with the server functionality of the Llama.cpp ecosystem and exposed it through an OpenAI-compatible API interface.
This means:
You can call CGLA inference using the standard OpenAI Python client.
Developers can run CGLA inference from:
Python applications
Web apps
Terminal scripts
Custom services using the OpenAI API schema
Any tool expecting an OpenAI-style completion/chat endpoint
The experience mirrors existing LLM workflows, but the underlying computation runs entirely on CGLA.
Example: Querying Qwen 1.5B on CGLA through the API
From a browser or Python script, a user sends:
Prompt: “What is a CPU?”
Model: Qwen 1.5B (quantized)
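For illustration, a minimal sketch of what this request could look like with the standard OpenAI Python client is shown below; the endpoint address, API-key placeholder, and model name are assumptions and will depend on how the CGLA-backed server is deployed:

```python
from openai import OpenAI

# Hypothetical setup: a llama.cpp-style server backed by CGLA, listening locally.
# The base_url, api_key placeholder, and model name are illustrative only.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="qwen-1.5b-instruct-q4",  # any Llama.cpp-compatible model
    messages=[{"role": "user", "content": "What is a CPU?"}],
)

print(response.choices[0].message.content)
```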
CGLA processes the input, runs inference using the model, and returns the generated response—just like any cloud LLM, but powered by our custom hardware.
Users can freely choose any Llama.cpp-compatible model, including Llama, Gemma, DeepSeek, and others.
Performance of Transformer Inference on CGLA
Inference performance on accelerators is typically measured by two metrics:
Input (prompt) processing speed, in tokens per second
Token generation speed, in tokens per second
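As a rough client-side illustration (not LENZO's measurement methodology), both figures can be approximated over the same OpenAI-compatible interface by streaming a completion and timing it; the endpoint and model name below are again placeholders:

```python
import time
from openai import OpenAI

# Placeholder endpoint and model name; adjust to the deployed CGLA-backed server.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

start = time.time()
first_token = None
n_chunks = 0

# Stream the completion so prompt processing and generation can be timed separately.
stream = client.chat.completions.create(
    model="qwen-1.5b-instruct-q4",
    messages=[{"role": "user", "content": "What is a CPU?"}],
    stream=True,
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token is None:
            first_token = time.time()  # time to first token ~ prompt processing
        n_chunks += 1                  # rough proxy: one streamed chunk ~ one token

if first_token is not None:
    gen_time = time.time() - first_token
    print(f"Time to first token: {first_token - start:.2f} s")
    print(f"Generation speed: ~{n_chunks / max(gen_time, 1e-6):.1f} tokens/s")
```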
Across multiple quantized models, CGLA demonstrates strong generation performance, with projected gains as we advance toward our 28 nm and 3 nm ASIC implementations. Where CGLA truly stands out is power efficiency. Compared with an NVIDIA RTX 4090 GPU, CGLA delivers:
Up to 44.4× improvement in power-delay product (PDP)
Up to 11.5× improvement in energy-delay product (EDP)
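For readers less familiar with these metrics, the standard definitions (lower is better for both) are:

```latex
\mathrm{PDP} = P_{\mathrm{avg}} \cdot t_{\mathrm{delay}}
\qquad
\mathrm{EDP} = \mathrm{PDP} \cdot t_{\mathrm{delay}} = P_{\mathrm{avg}} \cdot t_{\mathrm{delay}}^{2}
```

PDP is the energy spent per operation, while EDP additionally weights that energy by delay, so an architecture cannot score well on EDP simply by running slowly at very low power.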
These efficiency gains directly translate to lower operating costs and more sustainable large-scale deployments.
The Road Ahead
With Llama, Qwen, and other open-source LLMs already running on CGLA and accessible through an OpenAI-compatible API, we are now focused on two goals:
1. Becoming a Hugging Face Inference Provider
Allowing anyone worldwide to run Transformer inference on CGLA instantly through the platform.
2. Achieving even higher speed and lower power consumption for CGLA-based LLM inference
Through continued architectural refinements and ASIC development.
We appreciate your support as LENZO builds the future of compute—one that is open, efficient, and optimized for the AI systems shaping the coming decade.
Thank you for following the work of the LENZO LLM Team!