How to install Tolling Vision
This page is designed to help you install our software and integrate it into your system.
This guide will show you how to generate and use client code in various programming languages, enabling you to integrate our application seamlessly into your system. Refer to the example code provided to see how you can interact with our application using the language of your choice.
gRPC is a high-performance, open-source remote procedure call (RPC) framework that can run in any environment. It is designed for efficient communication between services, using HTTP/2 for transport, Protocol Buffers as the interface definition language, and providing features such as authentication, load balancing, and more. gRPC is particularly useful for microservices architectures, where different services need to communicate with each other quickly and reliably. For more information, visit the gRPC Documentation.
Protocol Buffers (Protobuf) is a language-agnostic binary serialization format developed by Google. It is used to define the structure of your data and the methods by which that data can be interacted with. Protobufs are efficient, fast, and ensure that your data is both portable and backwards-compatible. By using Protobuf, you can automatically generate data access classes in various programming languages, streamlining the development process and ensuring consistency across different parts of your system. For more information and examples, visit the Protocol Buffers Documentation.
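As a purely illustrative sketch (the message and service names below are hypothetical, not our actual .proto definition), a Protobuf service definition looks like this:

```protobuf
// Hypothetical example of a Protobuf/gRPC service definition.
// Names here are illustrative only; our actual .proto file is linked below.
syntax = "proto3";

message Ping {
  string message = 1;
}

message Pong {
  string message = 1;
}

service ExampleService {
  // A server-streaming RPC, similar in shape to the search and analyze
  // methods described later on this page.
  rpc Echo(Ping) returns (stream Pong);
}
```

From a definition like this, protoc generates message classes and a client stub in your chosen language.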
To interact with our application over gRPC, you need to use our pre-generated clients or generate client code in your preferred programming language. If you do not find your preferred programming language among the clients we have already generated, you can generate your own client following the steps below:
1. The .proto files which define the gRPC service. These files contain the service definitions and message types used by the service. You can find our .proto file here.
2. The Protocol Buffers compiler (protoc). This is necessary to compile the .proto files into client code. Follow the Protobuf Installation Guide for detailed instructions.
3. The gRPC code-generation tooling for your language. For Python, this is grpcio-tools, while for Java, you might use the grpc-java libraries.
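For example, assuming Python, the tooling from steps 2 and 3 can be installed with pip (these are the standard gRPC packages, not specific to Tolling Vision):

```shell
# Install the gRPC runtime and code-generation tooling for Python.
# grpcio-tools bundles a copy of protoc, so a separate protoc install
# is optional for this workflow.
pip install grpcio grpcio-tools
```

With the tooling installed, the protoc command below generates the Python client.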
python3 -m grpc_tools.protoc --python_out=OUT_DIR --grpc_python_out=OUT_DIR --proto_path=PATH_TO_PROTO_DIR TollingVisionService.proto
Here, PATH_TO_PROTO_DIR is the path to the directory where the TollingVisionService.proto file is located, and OUT_DIR is the target directory where the generated gRPC client will be placed.
By following these steps, you can generate and use gRPC clients in any programming language that supports gRPC, ensuring you can interact with our application in the language you are most comfortable with.
For Java (Maven), add the following dependency to your pom.xml:

<dependency>
  <groupId>com.smart-cloud-solutions</groupId>
  <artifactId>tollingvision</artifactId>
  <version>1.1.0</version>
</dependency>
Or, if you use Gradle, add the following line to your build.gradle file:
implementation 'com.smart-cloud-solutions:tollingvision:1.1.0'
To install the pre-generated Node.js client:

npm install @smart-cloud/tollingvision

To install the pre-generated Python client:

pip install tollingvision-scsinfo
Since the gRPC protocol is inherently language-agnostic, the service names and parameter names are consistent across all languages. Therefore, we will demonstrate the usage of services and parameters through a Node.js example only.
Prerequisites: Node.js with the @smart-cloud/tollingvision package installed.

search Method

Search for vehicles in an image.
import {
InputImage,
Point,
Region,
SearchRequest,
SearchResponse,
Status,
TollingVisionServiceClient,
} from '@smart-cloud/tollingvision';
/*
Create a client instance.
The only required parameter is the address of the Tolling Vision dockerized service or a load-balancer in front of it in a clustered environment,
in the format PROTOCOL://HOST[:PORT].
Here, HOST is the hostname or IP address and the optional PORT is the exposed TCP port of the application, e.g., http://127.0.0.1:8080.
*/
const client = new TollingVisionServiceClient('http://127.0.0.1:8080');
const searchRequest = new SearchRequest();
// Specify the image bytes (required)
searchRequest.setImage(new InputImage().setData(new Uint8Array([])).setName('IMAGE_NAME'));
// Enable make and model recognition (optional)
searchRequest.setMakeAndModelRecognition(true);
// Specify the location in ISO 3166-2 format (optional)
searchRequest.setLocation('US-CA');
// Specify the search region on the image (optional)
searchRequest.addRegion(new Region().addPoint(new Point(0, 0)).addPoint(new Point(10, 0)).addPoint(new Point(10, 10)).addPoint(new Point(0, 10)));
const call = client.search(searchRequest);
call.on('data', (response: SearchResponse) => {
switch (response.getStatus()) {
case Status.QUEUEING:
console.log('Request queued.');
break;
case Status.PROCESSING:
console.log('Request processing.');
break;
case Status.RESULT:
console.log('Search results:', response.getResultList());
break;
}
});
call.on('end', () => {
console.log('Search ended.');
});
call.on('error', (err) => {
console.error('Search error:', err);
});
Specification
SearchRequest Parameters
- PlateRecognition: Enable or disable license plate recognition. Default is true.
- MakeAndModelRecognition: Enable or disable make and model recognition. Default is false.
- SignRecognition: Enable or disable sign recognition. Default is false.
- InternationalRecognition: Enable or disable international license plate recognition. Default is false.
- Resampling: Enable or disable server-side image resampling to full HD. Default is true.
- ResultsWithoutPlateType: Enable or disable returning results without specific plate type. Default is false.
- Location: Specify the location where the image was taken in ISO 3166-2 format to improve search accuracy.
- Image: The image bytes to be analyzed.
- Region: Specify the regions of the image to narrow down the search. Multiple Regions (which include multiple Points) can be added by importing the necessary classes (Region and Point).
- MaxSearch: Maximum number of vehicles to search for in the image. Default is 1, maximum is 5.
- MaxRotation: Maximum allowed rotation of the license plate in degrees. Default is 45, maximum is 180.
- MaxCharacterSize: Maximum height of license plate characters in pixels. Default is between 20 and 80 pixels. Setting to -1 removes the upper limit, increasing processing time.

SearchResponse
The SearchResponse
message includes multiple status updates.
- RequestId: Server-generated identifier for debugging.
- Node: Server node name, relevant in a cluster setup.
- QueueingTime: Time spent waiting for a free processing thread.
- RecognitionTime: Total processing time excluding queueing time.
- InputImageOrientation: Orientation of the image based on metadata (EXIF orientation).
- Result: List of vehicles detected in the image, each containing:
  - Plate: Detected license plate.
  - Alternative: Alternative license plate results (if any).
  - Mmr: The make, model, and color recognition result.
  - Sign: Detected signs.
  - Frame: The bounding box of the vehicle.

analyze Method

The analyze method is represented by the EventRequest and EventResponse messages. It allows specifying multiple requests for different views of the vehicle, including multiple front, rear, and overview images.
Specification
The EventRequest message allows specifying multiple SearchRequests.

The EventResponse message includes ongoing PartialResults for each request, followed by a summary EventResult:

- PartialResult: Contains the intermediate results for each SearchRequest.
  - ResultType: Type of the result (FRONT, REAR, OVERVIEW).
  - ResultIndex: Index of the result in the particular type.
  - Result: The search response for the request.
  - Error: Error response if the request failed.
- EventResult: Summary of the most likely results, including:
  - FrontPlate: Most likely front plate.
  - FrontPlateAlternative: The alternative front license plates (if any).
  - RearPlate: Most likely rear plate.
  - RearPlateAlternative: The alternative rear license plates (if any).
  - Mmr: Most likely make, model, and color recognition result.
  - MmrAlternative: The alternative make, model, and color recognition results (if any).
  - Sign: Most likely signs detected.
  - ProcessingTime: The processing time in milliseconds.
import {
EventRequest,
EventResponse,
InputImage,
SearchRequest,
TollingVisionServiceClient,
} from '@smart-cloud/tollingvision';
// Reuse the client from the search example, or create one here:
const client = new TollingVisionServiceClient('http://127.0.0.1:8080');
const eventRequest = new EventRequest();
const frontRequest = new SearchRequest();
frontRequest.setImage(new InputImage().setData(new Uint8Array([])).setName('FRONT_IMAGE_NAME'));
const rearRequest = new SearchRequest();
rearRequest.setImage(new InputImage().setData(new Uint8Array([])).setName('REAR_IMAGE_NAME'));
const overviewRequest = new SearchRequest();
overviewRequest.setImage(new InputImage().setData(new Uint8Array([])).setName('OVERVIEW_IMAGE_NAME'));
overviewRequest.setMakeAndModelRecognition(true);
eventRequest.addFrontRequest(frontRequest);
eventRequest.addRearRequest(rearRequest);
eventRequest.addOverviewRequest(overviewRequest);
const call = client.analyze(eventRequest);
call.on('data', (response: EventResponse) => {
if (response.hasEventResult()) {
console.log('Event Result:', response.getEventResult()?.toObject());
} else if (response.hasPartialResult()) {
const partialResult = response.getPartialResult();
const resultType = partialResult?.getResultType();
const resultIndex = partialResult?.getResultIndex();
if (partialResult?.hasResult()) {
console.log(`Partial Result [${resultType}] [Index: ${resultIndex}]`, partialResult.getResult()?.toObject());
} else if (partialResult?.hasError()) {
console.error(`Error [${resultType}] [Index: ${resultIndex}]`, partialResult.getError()?.toObject());
}
}
});
call.on('end', () => {
console.log('Analyze ended.');
});
call.on('error', (err) => {
console.error('Analyze error:', err);
});
For usage examples and a deeper dive into our projects, visit our GitHub page and explore various code samples to see our solutions in action!