Artificial Intelligence Policy (AIP)

This AIP governs the use of DATALOGIC solutions, systems and products incorporating AI technologies (“DATALOGIC AI”), including generative capabilities, agentic components and machine learning, provided to enterprise customers, partners, resellers and end users (“Customers”), under an agreement with DATALOGIC.

This AIP supplements the agreement between DATALOGIC and the Customer. The Customer warrants that its end users use DATALOGIC AI in compliance with this AIP.

DATALOGIC adopts an approach based on transparency, proportionality and reliability. Nothing in this AIP derogates from DATALOGIC’s obligations as an AI provider.

DATALOGIC may periodically update this AIP in light of technological, regulatory or operational developments. Continued use of DATALOGIC AI after publication of a revised AIP constitutes acceptance of the revisions.

The Customer shall bear sole responsibility for the use and/or exploitation of DATALOGIC AI, including breaches of the AIP and/or of the pertinent agreements. DATALOGIC disclaims any liability and reserves the right to suspend/limit/revoke access to DATALOGIC AI, terminate agreements and/or involve the competent authorities in case of improper uses and/or possible risks.

Ownership of DATALOGIC AI Systems, Prohibition of Reverse Engineering and Confidentiality


DATALOGIC AI is protected by applicable patent and copyright laws, international treaty provisions, and other applicable laws. DATALOGIC AI is the sole and exclusive property of DATALOGIC. The Customer agrees that such systems, including any new applications, all development works, title and all other rights of whatsoever nature relating to DATALOGIC AI, shall remain the sole property of DATALOGIC, unless otherwise agreed in writing between DATALOGIC and the Customer.

The Customer is expressly prohibited from carrying out any reverse engineering, decompilation or disassembly, or otherwise attempting to derive the source code, structure or functioning of DATALOGIC AI, save to the extent mandatorily permitted under applicable law. Additional restrictions on the use of DATALOGIC AI and other software are set forth in the End User License Agreements (EULAs) applicable to the products and applications that incorporate or execute DATALOGIC AI.

DATALOGIC AI shall be deemed “Confidential Information”.

Authorized Uses and Guidelines for DATALOGIC AI


The Customer is permitted to use DATALOGIC AI solely for the purposes expressly defined in the agreement with DATALOGIC (the “Permitted Purposes”). The Customer shall not integrate, embed, redistribute or otherwise use DATALOGIC AI in third-party systems/modules/platforms, unless prior written authorization has been obtained from DATALOGIC.

Where no specific Permitted Purposes are set out in the agreement, the authorized use shall be limited to:

  • Documented business functionalities: chat, search, text processing/classification, computer vision or analytical tools for internal processes.
  • Tools and functionalities made available through partner programs, collaborative networks or DATALOGIC initiatives.
  • Internal tests, trials or proof-of-concepts, aligned with the Customer’s business objectives, with proportionate oversight and responsible use.


For the Permitted Purposes, DATALOGIC shall provide informational materials, operating guides and illustrative documentation (including non-binding guidelines).

Prohibited Uses and Unacceptable Conduct


The Customer and its end users shall not facilitate, permit or enable the use of DATALOGIC AI to generate, support or engage in any content, activity or conduct falling within (or that may reasonably fall within) the following prohibited categories (the “Prohibited Uses”):

  • Fraudulent, misleading content or disinformation: spam, scams, phishing, malware, intentional false statements or misleading communications.
  • Unauthorized access or digital intrusions: attempts to access third parties’ systems/networks/accounts; tracking/localization/monitoring of individuals without a legal basis.
  • Unauthorized disclosure of private information or personal data: sharing sensitive data, images/videos of third parties without a legal basis/consent.
  • Manipulation, disruption or interference with DATALOGIC AI: intentional overloading, excessive and/or automated use not compliant with the Permitted Purposes, bypassing safeguards/security measures.
  • IP or confidentiality infringements: use of copyrighted works, trade secrets or third parties’ proprietary information without a valid legal entitlement.

Prohibited AI Practices and Compliance Requirements


The Customer and its end users shall not, under any circumstances, facilitate, permit or enable the use of DATALOGIC AI for practices prohibited under applicable law (the “Prohibited AI Practices”).

By way of example:

  • Exploitation of vulnerable persons, undue influence over behavior, and subliminal/manipulative/coercive techniques undermining free will.
  • Social scoring systems or mechanisms enabling discriminatory classifications and/or generalized assessments of individuals.
  • Individual predictive policing based on behavioral profiling, sensitive data or arbitrary/discriminatory processing.
  • Indiscriminate large-scale scraping of facial images or mass collection of biometric data.
  • Emotion recognition in workplace/educational contexts where it infringes fundamental rights.
  • Biometric categorization aimed at inferring sensitive attributes (ethnicity, political opinions, religious/philosophical beliefs, sexual orientation, etc.).
  • Real-time remote biometric identification in publicly accessible spaces, including activities carried out for law enforcement agencies/public authorities.
  • Uses resulting in discriminatory outcomes, unlawful algorithmic bias or unjustified differentiated treatment (including unintentional).


Such activities shall collectively constitute the “Prohibited AI Practices”.

Restrictions on High-Risk AI Uses and Customer Responsibility


The uses listed below fall within categories of artificial intelligence usage deemed critical or sensitive (“High-Risk AI”) under applicable law. DATALOGIC AI is not designed, developed or intended to support or enable such purposes and shall not be used by the Customer in these contexts.

Use of DATALOGIC AI is expressly prohibited for:

  • Safety components of products/systems subject to certification by notified bodies and/or third parties.
  • Remote biometric identification (real-time or post-event).
  • Critical infrastructures (digital management, road traffic, water, gas, energy, etc.).
  • Educational/training/assessment purposes (assignment of outcomes to students/learners).
  • Autonomous decision-making in the employment context (recruitment, workforce management, performance evaluations, access to work/gig platforms).
  • Access to essential public/private services or social/economic benefits.
  • Typical activities of public authorities (law enforcement, migration, justice, democratic processes).
  • Political campaigns, electoral communications or influence over voting processes.
  • Legal interpretation, binding advice or decisions in legal/quasi-judicial contexts.
  • Healthcare determination, diagnosis or clinical decision-making.
  • Automated financial decisions (credit scoring, eligibility, risk assessment with legal effects).
  • Access to employment and/or housing opportunities.

Customer Responsibility for Prohibited or High-Risk Uses and Governance Obligations

Any use of DATALOGIC AI in prohibited practices or in high-risk contexts shall be solely at the Customer’s risk. DATALOGIC reserves the right to suspend, limit or revoke access to DATALOGIC AI, terminate the agreement with the Customer, and/or report unlawful or suspicious conduct to the competent authorities in case of improper, unauthorized or non-compliant use identified directly or indirectly. In line with best practices, DATALOGIC may adopt general technical and/or organizational measures to prevent improper uses, without any obligation of active monitoring.

The Customer is solely responsible for the proper use of DATALOGIC AI and shall ensure that its use cases do not render DATALOGIC AI a high-risk system.

The Customer’s exclusive obligations include:

  • Providing transparent information to end users, training for responsible use and notices/warnings with clear instructions.
  • Ensuring full compliance (data protection, IP, product safety, etc.).
  • Using only lawful, verified, up-to-date and suitable data/inputs, subject to proportionate controls.
  • Limiting use to the Permitted Purposes, aligned with legitimate business interests (fairness, proportionality).
  • Verifying the accuracy of outputs and subjecting decisions to human oversight.
  • Implementing human oversight where required by law or where necessary.
  • Performing risk assessment and/or DPIA where DATALOGIC AI processes personal and/or sensitive data.
  • Avoiding deceptive, unfair or misleading practices.
  • Informing users of AI-generated content and obtaining all legally required consents.
  • Promptly reporting to DATALOGIC any improper uses, incidents, breaches or risks.
  • Maintaining documentation required by law and/or, in any event, an internal policy.


DATALOGIC may provide non-binding informational materials to support the understanding of the functionalities.

Consequences of AIP Breaches and DATALOGIC Enforcement Rights


At its sole discretion (with no obligation), DATALOGIC may provide prior notice and an opportunity to remedy, taking into account the seriousness and/or urgency of the circumstances.

DATALOGIC shall act immediately without prior notice where it identifies and/or suspects:

  • improper/abusive/unauthorized uses of DATALOGIC AI;
  • breaches of law or legal/reputational/security risks for DATALOGIC; or
  • uses not compliant with the agreement and/or the Permitted Purposes.


Such measures shall not imply any obligation of active monitoring, systematic review, or continuous oversight. DATALOGIC shall have no liability towards third parties for the Customer’s non-compliant conduct.