The idea is to be able to plug in any state-of-the-art model into EasyOCR. There are a lot of geniuses trying to make better detection/recognition models, but we are not trying to be geniuses here. We just want to make their works quickly accessible to the public. (Well, we believe most geniuses want their work to create a positive impact as fast/big as possible.) The pipeline should be something like the below diagram: grey slots are placeholders for changeable light blue modules. For example, `reader = easyocr.Reader(…, detection='DB', recognition='Transformer')`.

This project is based on research and code from several papers and open-source repositories. All deep learning execution is based on PyTorch. ❤️

Detection execution uses the CRAFT algorithm from its official repository and paper (thanks!). We also use their pretrained model, and a training script is provided.

The recognition model is a CRNN (paper). It is composed of 3 main components: feature extraction (we are currently using ResNet and VGG), sequence labeling (LSTM) and decoding (CTC). The training pipeline for recognition execution is a modified version of the deep-text-recognition-benchmark framework (thanks!); that repository is a gem that deserves more recognition. Beam search code is based on another repository and its author's blog (thanks!). Data synthesis is based on TextRecognitionDataGenerator (thanks!). A good read about CTC is available from distill.pub (thanks!).

Want to contribute? Let's advance humanity together by making AI available to everyone!

Coder: please send a PR for small bugs/improvements. For bigger ones, discuss with us by opening an issue first. There is a list of possible bug/improvement issues tagged with 'PR WELCOME'.

User: tell us how EasyOCR benefits you/your organization to encourage further development. Also post failure cases in the Issue section to help improve future models.

You can also set `detail=0` for simpler output. Loading a model into memory takes some time, but it needs to be run only once.
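The CTC decoding step in the recognition pipeline above can be illustrated with a greedy decoder: take the most likely label at each time step, collapse consecutive repeats, then drop blanks. This is a minimal sketch of the general technique, not EasyOCR's actual decoder; the alphabet and label sequence below are made up for illustration.

```python
# Illustrative greedy CTC decoder (not EasyOCR's implementation).
# Input: per-timestep label indices (argmax of the network output),
# with index 0 reserved for the CTC blank symbol.

def ctc_greedy_decode(labels, alphabet, blank=0):
    """Collapse repeated labels, then remove blank symbols."""
    out = []
    prev = None
    for idx in labels:
        # Keep a label only if it differs from the previous timestep
        # (repeat collapse) and is not the blank.
        if idx != prev and idx != blank:
            out.append(alphabet[idx])
        prev = idx
    return "".join(out)

# alphabet[0] is the blank; the rest are recognizable characters.
alphabet = ["-", "c", "a", "t"]
# Network output over 8 time steps: "cc-aa-tt" collapses to "cat".
print(ctc_greedy_decode([1, 1, 0, 2, 2, 0, 3, 3], alphabet))  # -> cat
```

Note that a blank between two identical labels keeps them distinct: `[1, 0, 1]` decodes to `"cc"`, not `"c"`, which is how CTC represents doubled letters.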
Note 1: The first argument is the list of languages you want to read. You can pass several languages at once, but not all languages can be used together. English is compatible with every language, and languages that share common characters are usually compatible with each other.

Note 2: Instead of the filepath chinese.jpg, you can also pass an OpenCV image object (numpy array) or an image file as bytes.

Note 3: The line `reader = easyocr.Reader(…)` is for loading a model into memory.

Second-generation models: multiple times smaller size, multiple times faster inference, additional characters, and accuracy comparable to the first-generation models. EasyOCR will choose the latest model by default, but you can also specify which model to use by passing the `recog_network` argument when creating a `Reader` instance. For example, `reader = easyocr.Reader(…, recog_network='latin_g1')` will use the 1st-generation Latin model.

Changelog highlights:

- Integrated into Huggingface Spaces using Gradio.
- DBnet will only be compiled when users initialize the DBnet detector.
- New default model for Cyrillic script.
- Restructure code to support alternative text detectors. The DBnet detector can be used by initializing like this: `reader = easyocr.Reader(…, detect_network='dbnet18')`.
- Add trainer for CRAFT detection model (thanks, see PR).
- Update dependencies (opencv and pillow issues).
- Add `readtextlang` method (thanks, see PR).
- Extend `rotation_info` argument to support all possible angles (thanks abde0103, see PR).
- Instructions on training/using custom recognition models.
- Batched image inference for GPUs (thanks, see PR).
- Vertical text support (thanks). This is for rotated text, not to be confused with vertical Chinese or Japanese text.
- Output in dictionary format (thanks, see PR).
- Fix bug when a text box's aspect ratio is disproportional (thanks iQuartic for the bug report).
- Faster greedy decoder (thanks).
- Add Tajik language (tjk).
- Add support for PIL images (thanks).
- Update argument handling for the command line.
- Add `x_ths` and `y_ths` to control merging behavior when `paragraph=True`.
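By default, EasyOCR's `readtext` returns a list of (bounding box, text, confidence) tuples, and `detail=0` (mentioned above) reduces this to just the text strings. The sketch below shows that reduction on made-up sample values; it does not call EasyOCR itself.

```python
# A sample result in EasyOCR's default output shape:
# (bounding-box corner points, recognized text, confidence score).
# The coordinates, strings, and scores here are made up for illustration.
full_result = [
    ([[86, 80], [134, 80], [134, 128], [86, 128]], "西", 0.89),
    ([[189, 75], [469, 75], [469, 165], [189, 165]], "愚园路", 0.98),
]

# detail=0 is equivalent to keeping only the text field of each tuple:
texts = [text for _bbox, text, _conf in full_result]
print(texts)  # -> ['西', '愚园路']
```

The full form is what you want for drawing boxes or filtering by confidence; `detail=0` is convenient when you only need the recognized strings.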
Ready-to-use OCR with 80+ supported languages and all popular writing scripts including: Latin, Chinese, Arabic, Devanagari, Cyrillic, etc.