Ollama-OCR for High-Precision OCR with Ollama
Llama 3.2-Vision is a multimodal large language model available in 11B and 90B sizes, capable of processing both text and image inputs to generate text outputs. The model excels in visual recognition, image reasoning, image description, and answering image-related questions, outperforming existing open-source and closed-source multimodal models across multiple industry benchmarks.
Llama 3.2-Vision Examples
Handwriting
Optical Character Recognition (OCR)
In this article, I will describe how to call the Llama 3.2-Vision 11B model served locally by Ollama and implement image text recognition (OCR) using the Ollama-OCR library.
Features of Ollama-OCR
- High-accuracy text recognition using the Llama 3.2-Vision model
- Preserves original text formatting and structure
- Supports multiple image formats: JPG, JPEG, PNG
- Customizable recognition prompts and models
- Optional Markdown output format
- Robust error handling
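Since only JPG, JPEG, and PNG are supported, a small pre-flight check can reject other file types before they reach the OCR call. This is a hypothetical helper of our own (`isSupportedImage` is not part of the ollama-ocr library), sketched here for illustration:

```typescript
// Hypothetical pre-flight check before handing a file to ollama-ocr.
// SUPPORTED_EXTENSIONS mirrors the formats listed above (JPG, JPEG, PNG);
// the helper name `isSupportedImage` is our own, not part of the library.
import { extname } from "node:path";

const SUPPORTED_EXTENSIONS = new Set([".jpg", ".jpeg", ".png"]);

function isSupportedImage(filePath: string): boolean {
  // extname returns the extension including the dot, e.g. ".jpg"
  return SUPPORTED_EXTENSIONS.has(extname(filePath).toLowerCase());
}

console.log(isSupportedImage("./handwriting.jpg")); // true
console.log(isSupportedImage("./scan.tiff"));       // false
```

Guarding the input this way turns an unsupported-format failure into a clear, early error instead of a confusing response from the model.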
Installing Ollama
Before you can start using Llama 3.2-Vision, you need to install Ollama, a platform that supports running multimodal models locally. Follow the steps below to install it:
- Download Ollama: Visit the official Ollama website to download the installation package for your operating system.
- Install Ollama: Run the downloaded installer and follow the prompts to complete the installation.
Install Llama 3.2-Vision 11B
After installing Ollama, you can pull and run the Llama 3.2-Vision 11B model with the following command (the model weights are downloaded automatically on first run):
ollama run llama3.2-vision
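Under the hood, a client such as ollama-ocr ultimately talks to Ollama's local REST API (by default at `http://localhost:11434`), sending the image as base64 alongside the prompt. The sketch below builds such a request body following Ollama's documented `/api/chat` format; `buildOcrRequest` is an illustrative helper of our own, not a library function, and the truncated base64 string is a placeholder:

```typescript
// Sketch of the request body a client sends to Ollama's chat endpoint
// (POST http://localhost:11434/api/chat). The payload shape follows
// Ollama's documented API; buildOcrRequest is our own illustrative helper.
function buildOcrRequest(imageBase64: string, systemPrompt: string) {
  return {
    model: "llama3.2-vision",
    messages: [
      {
        role: "user",
        content: systemPrompt,
        images: [imageBase64], // base64-encoded image bytes, no data: URI prefix
      },
    ],
    stream: false, // one complete response instead of a token stream
  };
}

const req = buildOcrRequest("iVBORw0KGgo...", "Recognize all text in the image.");
console.log(JSON.stringify(req, null, 2));
```

Knowing this shape is useful when debugging: you can replay the same request with curl against the local Ollama server to check whether a problem lies in the model or in the client library.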
How to use Ollama-OCR
npm install ollama-ocr
# or using pnpm
pnpm add ollama-ocr
1. OCR
Code
import { ollamaOCR, DEFAULT_OCR_SYSTEM_PROMPT } from "ollama-ocr";

async function runOCR() {
  const text = await ollamaOCR({
    filePath: "./handwriting.jpg",
    systemPrompt: DEFAULT_OCR_SYSTEM_PROMPT,
  });
  console.log(text);
}

runOCR();
Input Image:
Output:
The Llama 3.2-Vision collection of multimodal large language models (LLMs) is a collection of instruction-tuned image reasoning generative models in 118 and 908 sizes (text images in / text out). The Llama 3.2-Vision instruction-tuned models are optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The models outperform many of the available open source and closed multimodal models on common industry benchmarks.
2. Markdown Output
import { ollamaOCR, DEFAULT_MARKDOWN_SYSTEM_PROMPT } from "ollama-ocr";

async function runOCR() {
  const text = await ollamaOCR({
    filePath: "./trader-joes-receipt.jpg",
    systemPrompt: DEFAULT_MARKDOWN_SYSTEM_PROMPT,
  });
  console.log(text);
}

runOCR();
Input Image:
Output:
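Beyond the two built-in prompts, the features above mention that recognition prompts are customizable: you can pass your own string as `systemPrompt`. The prompt text below is our own example, not something shipped with the library:

```typescript
// Custom system prompt example. The prompt wording is our own; only the
// { filePath, systemPrompt } options shape comes from the examples above.
const RECEIPT_PROMPT = `Extract every line item from the receipt as a markdown
table with columns: item, quantity, price. Output only the table.`;

const ocrOptions = {
  filePath: "./trader-joes-receipt.jpg",
  systemPrompt: RECEIPT_PROMPT,
};

// Pass `ocrOptions` to ollamaOCR(...) exactly as in the examples above.
console.log(ocrOptions.systemPrompt);
```

Tightening the prompt like this constrains the model's output shape, which is often more reliable than post-processing free-form OCR text.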
ollama-ocr uses a locally served vision model. If you want to use a hosted Llama 3.2-Vision model instead, try the llama-ocr library.
The above is the detailed content of Ollama-OCR for High-Precision OCR with Ollama. For more information, please follow other related articles on the PHP Chinese website!