Vendors & Technology Landscape

Which vendors are state-of-the-art (face, fingerprint, iris) for national ID systems (Global South)?

Some key players:

  • NEC Corporation: face recognition consistently at or near the top of NIST 1:N evaluations. ([NEC Global][5])
  • IDEMIA: strong track record in fingerprint, iris, and fairness evaluations. ([ID Tech][6])
  • Innovatrics: strong face recognition performance, emerging global footprint. ([Innovatrics][7])
  • TECH5 Group: fingerprint and iris matching algorithm vendor claiming high speed and accuracy in NIST PFT/IREX. ([tech5.ai][8])

Differentiators

  • Algorithm accuracy (matching performance + template size + speed)
  • Robustness under difficult capture conditions (low light, environmental damage, rural enrolment)
  • Scalability (gallery size, throughput)
  • Operational maturity (field deployments, national ID experience)
  • Ecosystem support (hardware, capture devices, sensor quality, quality assurance)
  • Cost / total cost of ownership (especially important for the Global South)

Vendors specialising in low-resource / challenging environments

  • Some vendors and capture hardware firms focus on rugged sensors, offline/low-connectivity enrolment, dust/humidity tolerant devices.
  • For example, TECH5 claims its algorithms are used in large-scale ABIS in “digital identity management applications … already available for national ID-scale projects”. ([tech5.ai][8])
  • Also, the capture devices listed in the MOSIP marketplace (which you referenced) often target resource-constrained settings — a useful resource for your work.

Vendor-lock-in, open-source alternatives

  • Many national ID systems use commercial ABIS solutions; vendor-lock-in is a real concern (proprietary matching engines, templates, index structures).

  • Performance of open-source vs commercial: open-source matchers can achieve good accuracy, but they often lack the integration, tuning, scalability, and security certifications of commercial ABIS. For your museum/consulting audience it is worth noting that “commodity” algorithms exist, but delivering full ABIS-level system performance and integration is a different challenge.

Algorithms & Technological Advancements

1:1 (verification) vs 1:N (identification)

  • Verification (1:1): comparing a probe to one claimed identity — a fixed decision threshold is common.
  • Identification (1:N): searching a probe against a large gallery — requires indexing/fusion, high throughput, and very low false-positive risk (a minimal sketch contrasting the two modes follows this list).
  • Algorithms today increasingly leverage deep-learning embeddings (especially for face and iris) rather than classic minutiae or hand-crafted features.
  • For fingerprint: minutiae + ridge-flow + deep features; for iris: texture + deep features; for face: deep CNN embeddings with pose/age/lighting compensation.
  • Advancements: multi-modal fusion (face+iris, face+fingerprint), contactless fingerprint capture (mobile/optical), better template compression, faster search/indexing, presentation-attack/spoof detection.
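
To make the two modes concrete, here is a minimal sketch over L2-normalised deep-learning embeddings, assuming cosine similarity as the comparison score. The random vectors stand in for a real model's output; production ABIS engines add quality gating, approximate-nearest-neighbour indexing, fusion, and calibrated thresholds.

```python
import numpy as np

def normalise(v: np.ndarray) -> np.ndarray:
    """L2-normalise embeddings so a dot product equals cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def verify(probe: np.ndarray, claimed: np.ndarray, threshold: float = 0.6) -> bool:
    """1:1 verification -- compare a probe to one claimed identity's template."""
    return float(normalise(probe) @ normalise(claimed)) >= threshold

def identify(probe: np.ndarray, gallery: np.ndarray,
             threshold: float = 0.7, k: int = 5) -> list[tuple[int, float]]:
    """1:N identification -- rank the gallery, return candidates above threshold.
    1:N thresholds are typically stricter: false-positive risk grows with N."""
    scores = normalise(gallery) @ normalise(probe)
    top = np.argsort(scores)[::-1][:k]
    return [(int(i), float(scores[i])) for i in top if scores[i] >= threshold]

rng = np.random.default_rng(0)
gallery = rng.normal(size=(10_000, 512))            # toy 512-d "templates"
probe = gallery[42] + 0.1 * rng.normal(size=512)    # noisy recapture of ID 42
print(verify(probe, gallery[42]))                   # True: 1:1 against the claim
print(identify(probe, gallery))                     # [(42, ~0.99)]: 1:N search
```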

Are these algorithms commodities?

  • To an extent, yes: many vendors supply high-accuracy matching engines for the standard modalities. The real differentiators are integration (capture, workflow, quality assurance, environmental robustness), scaling to tens of millions of identities, and optimisation for the target population.
  • So for ABIS you cannot evaluate the matching algorithm in isolation — the whole system (sensor + capture + template + index + workflow + field conditions) is what matters.

Real-World Deployed Accuracy vs Lab / NIST

How do real-world national ID system accuracy numbers differ?

  • Deployed systems often perform worse than lab/benchmark figures. Reasons include less controlled capture (lighting, subject cooperation, dirty or damaged fingers), variable sensor quality, age/illness/injury effects, environmental conditions (dust, humidity), low connectivity (delays, retries), large population scale, and demographic diversity.
  • There are some published studies of “lab vs field” performance gaps (for example finger/iris in large scale deployments) but in many national ID cases the results are internal to the operator and not publicly disclosed.
  • For example, the iris recognition paper “Empirical Assessment of End-to-End Iris Recognition System Capacity” (2023) shows that system capacity (gallery size, quality) affects performance in identification mode. ([arXiv][9])
  • Important factors contributing to the gap: subject ageing, template ageing (time since enrolment), capture device mismatch (enrolment vs probe), enrolment population heterogeneity, demographic diversity, multi-finger/eye fusion, re-enrolments, environmental/operational conditions.

Limitations of NIST evaluations in national-ID context

  • NIST evaluations often use benchmark datasets that may not reflect the diversity (skin tones, age, damaged fingers, environmental conditions) of a Global South national ID population.
  • Gallery sizes and operational conditions differ: NIST may test millions of identities, but a real national ID system may search tens of millions under far more heterogeneous conditions (rural, low infrastructure).
  • Metrics reported (e.g., FNIR @ FPIR) do not always map directly to ID-system success criteria (deduplication, onboarding, multi-modal fusion), where the consequences of a false match differ — see the sketch after this list for how such an operating point is computed.
  • Dataset imbalance: if Asian/African skin tones or iris pigmentation/phenotypes are under-represented, published accuracy may not reflect the deployed populations of developing countries.
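
As background for the FNIR @ FPIR point above, here is a minimal sketch of how such an operating point is derived from raw search scores. The score distributions are synthetic, and the candidate-list subtleties of real NIST 1:N protocols are omitted.

```python
import numpy as np

def fnir_at_fpir(mated_scores, nonmated_scores, target_fpir=0.001):
    """Set the threshold so only target_fpir of non-mated searches return a
    hit, then report the miss rate (FNIR) of mated searches at that threshold."""
    threshold = float(np.quantile(np.asarray(nonmated_scores), 1.0 - target_fpir))
    fnir = float(np.mean(np.asarray(mated_scores) < threshold))
    return fnir, threshold

rng = np.random.default_rng(1)
mated = rng.normal(0.8, 0.1, 100_000)      # genuine searches: high scores
nonmated = rng.normal(0.3, 0.1, 100_000)   # impostor searches: low scores
fnir, thr = fnir_at_fpir(mated, nonmated)
print(f"FNIR @ FPIR=0.1%: {fnir:.4f} (threshold {thr:.3f})")
```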

Bias & Inclusivity

Demographic/racial/gender bias in biometrics

  • Some biometric modalities are more affected by demographic bias than others. Face recognition, for example, has documented biases by skin tone, gender, and age; NIST runs evaluation tracks on face-mask and demographic effects. ([NIST][4])
  • Fingerprint systems can struggle with worn-fingers (older subjects, manual workers), contact issues, environmental damage; such effects may correlate with socio-economic status, age, gender.
  • Iris recognition tends to be more stable over time (good for adults) but can face capture issues (eyelids, glasses, infants/children).
  • Impact on national ID: If certain demographic groups (women, older adults, rural populations) have higher error rates, this can lead to inclusion/exclusion risks, higher false rejections, disenfranchisement.

Vendor / dataset practices & best practices

  • Many top vendors now emphasise fairness-aware algorithm design, balanced datasets, demographic effect reporting. For example IDEMIA emphasises accuracy and fairness in their benchmarking commentary. ([IDEMIA][10])

  • Best practices for addressing bias:

    • Ensure capture devices and workflows work equally well for all demographic groups
    • Conduct independent accuracy/fairness audits (especially on local populations)
    • Monitor error rates by sub-group (gender, age, ethnicity) in deployment — a minimal monitoring sketch follows this list
    • Use multimodal fusion if one modality is weak for a sub-group
    • Offer fallback/re-enrolment options, manual operator review, and inclusive design (e.g., children, those without clear fingerprints)
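
A minimal sketch of the sub-group monitoring point above, assuming a deployment log of genuine-attempt outcomes tagged with a demographic group. The group labels and the max/min FNMR ratio used as a red-flag signal are illustrative choices, not a standard metric.

```python
from collections import defaultdict

def fnmr_by_group(records: list[tuple[str, bool]]) -> dict[str, float]:
    """records: (group, genuine_attempt_accepted) pairs from deployment logs.
    Returns the per-group false non-match rate for genuine attempts."""
    totals: dict[str, int] = defaultdict(int)
    rejects: dict[str, int] = defaultdict(int)
    for group, accepted in records:
        totals[group] += 1
        rejects[group] += 0 if accepted else 1
    return {g: rejects[g] / totals[g] for g in totals}

log = [("women_60plus", False), ("women_60plus", True), ("women_60plus", True),
       ("men_18_40", True), ("men_18_40", True),
       ("men_18_40", True), ("men_18_40", False)]
rates = fnmr_by_group(log)
ratio = max(rates.values()) / max(min(rates.values()), 1e-9)
print(rates, f"max/min FNMR ratio: {ratio:.2f}")   # large ratio -> investigate
```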

Country Requirements & Real-World National ID Use

What do countries specify in RFPs?

  • Many national ID RFPs specify biometric quality thresholds (e.g., image resolution, sensor specification/certification, capture angle/illumination) as well as matching-accuracy metrics (deduplication false match/false non-match rates).
  • Typical requirements: deduplication false match rate ≤ 1 in 100 000; false non-match rate ≤ some small percentage; template size ≤ X KB; capture devices compliant with ISO/IEC 19794 standards; certified sensors. A sketch of encoding and checking such thresholds follows this list.
  • Implementation note: review recent RFPs in Africa/Asia to capture specific language (I can pull a few examples if helpful).
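
One way to make such requirements testable is to encode them as data and check benchmark results against them automatically. A minimal sketch follows; every threshold here is a hypothetical placeholder, not drawn from any specific tender.

```python
RFP_THRESHOLDS = {
    "dedup_fmr_max": 1e-5,           # false match rate <= 1 in 100 000
    "fnmr_max": 0.02,                # false non-match rate <= 2%
    "template_kb_max": 2.0,          # per-finger template size in KB
    "capture_standard": "ISO/IEC 19794",
}

def check_compliance(measured: dict) -> list[str]:
    """Return the list of failed requirements (empty list = compliant)."""
    failures = []
    if measured["fmr"] > RFP_THRESHOLDS["dedup_fmr_max"]:
        failures.append("FMR exceeds deduplication limit")
    if measured["fnmr"] > RFP_THRESHOLDS["fnmr_max"]:
        failures.append("FNMR exceeds limit")
    if measured["template_kb"] > RFP_THRESHOLDS["template_kb_max"]:
        failures.append("Template exceeds size limit")
    if RFP_THRESHOLDS["capture_standard"] not in measured["standards"]:
        failures.append("Capture device not certified to required standard")
    return failures

print(check_compliance({"fmr": 8e-6, "fnmr": 0.015,
                        "template_kb": 1.2, "standards": ["ISO/IEC 19794"]}))  # []
```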

Do countries use state-of-the-art ABIS? Field failure rates & practical limitations

  • Yes, many countries aim to procure state-of-the-art systems, but field conditions (rural enrolment, poor lighting, low training of operators, connectivity outage) cause deviations.
  • Practical failure rates in the field are often higher than lab numbers: higher failure-to-enrol rates, more manual overrides and re-enrolments, higher drop-out, and more false rejects/false accepts.
  • Example: the iris capacity paper suggests the “constrained capacity” (how many identities a system can support before performance degrades) is a real effect. ([arXiv][9])
  • For your museum or consultancy context this means: emphasise that “lab performance is a floor, not a guarantee” and that national programmes should plan for deployment loss-factors, continuous monitoring, periodic re-enrolment, sensor replacement, operator training.

Privacy concerns

  • Yes, countries are increasingly aware of privacy and ethical risks: biometric data protection, consent, storage security, data minimisation, purpose limitation.
  • Some national ID programmes include legislative frameworks for biometric data, reuse restrictions, retention periods, etc.
  • Also architectural choices: decentralised storage, selective access, template encryption, user-controlled biometrics. All relevant for your climate-adaptation / ID museum context.

On-Device Biometrics vs Large-Scale ABIS

Differences

  • On-device (e.g., smartphone fingerprint/face unlock) uses much smaller gallery (one user), simpler sensor (capacitive, optical), often single-factor, integrated into consumer OS.
  • ABIS for national ID involves: a large gallery (millions to hundreds of millions), high throughput for enrolment/search/deduplication, high security and fraud resilience, rigid operational workflows, multi-modal capture, strict template/interchange standards, and high availability. The deduplication scale arithmetic is sketched after this list.
  • Sensor types differ: On-device may use capacitive fingerprint, 3D face depth sensor, smartphone camera; ABIS capture may use industrial-grade optical fingerprint, dedicated iris cameras, enrolment kiosks or mobile enrolment units.
  • On-device constraints: cost/power/sensor size; ABIS constraints: scalability, reliability in field, maintenance, connectivity, data protection, lifecycle management.
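
A back-of-envelope sketch of why the two regimes differ: brute-force deduplication of a gallery of N identities is on the order of N²/2 comparisons, which is why national-scale ABIS relies on indexing, binning, and distributed search. The throughput figure is an illustrative assumption.

```python
def brute_force_dedup_seconds(n_identities: int, comparisons_per_sec: float) -> float:
    """Seconds to deduplicate a gallery by exhaustive pairwise comparison."""
    pairs = n_identities * (n_identities - 1) / 2
    return pairs / comparisons_per_sec

# On-device: a gallery of one user -> effectively zero matching cost.
print(brute_force_dedup_seconds(1, 1e6))                    # 0.0
# National ID: 50M identities at an assumed 100M comparisons/sec.
days = brute_force_dedup_seconds(50_000_000, 1e8) / 86_400
print(f"~{days:.0f} days of brute-force matching")          # ~145 days
```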

Decentralisation possibilities

  • Yes, I foresee further decentralisation: mobile enrolment units, edge processing, offline sensor capture, local fusion before cloud, multimodal mobile/apps for remote enrolment, self-sovereign identity (SSI) + biometrics.
  • But for national ID, there remains a need for central deduplication, auditability, high-performance indexing—so decentralisation does not mean “no central system” in most cases.

Proposed Table (with caveats)

| Modality | 1:N Accuracy (FNIR @ FPIR) | 1:1 Accuracy (FNMR @ FMR) | Cost Range (capture device) | Payload Size | Strengths | Limitations |
| --- | --- | --- | --- | --- | --- | --- |
| Fingerprint | ~0.18% @ FPIR = 0.1%* (ideal) | ≤ 2% @ FMR ≤ 1/10 000* | ~$50-100 | ~0.3-1.5 KB per finger | High maturity, multi-finger fusion | Worn fingers, hygiene/contact issues, environment |
| Face | ~3% @ FPIR = 0.1%* (ideal) | ~0.1% @ FMR ≤ 1/100 000* | <$1 (smartphone camera) | ~15-150 KB | Cheap, remote auth, fast algorithmic advances | Demographic bias, pose/twins/lighting, presentation attacks |
| Iris | ~0.67% @ FPIR = 0.1%* (ideal) | ~0.57% @ FMR ≤ 1/100 000* | ~$100-300 | ~0.5-2 KB per iris | High stability over time, very high accuracy | Cost, user discomfort/cultural acceptance, some capture issues |

  • Footnote (*): based on best-in-class lab/benchmark data; actual field performance may be worse.

Slide 13 — Implications for ABIS / Global South Deployments

  • When designing for national ID in challenging environments (rural, low connectivity, large populations), assume a performance buffer (e.g., worse than lab by a factor of 2-5×) and plan accordingly (manual review workflows, fallback capture, re-enrolment) — a worked example follows this list.
  • Sensor investment matters: Capture hardware, operator training, environment control (lighting, finger cleaning, enrolment logistics) often drive more error than the algorithm itself.
  • Data quality & template management are critical: use ISO/IEC 19794 (biometric data interchange formats), ISO/IEC 30107 (presentation attack detection), and related AI-governance standards such as ISO/IEC 42006 (audit and certification of AI management systems). Your interest in ISO/IEC 42006:2025 gives you a strong edge here.
  • Multi-modal fusion helps if one modality is weak in your environment (e.g., wet/dirt fingers, children with weak fingerprints).
  • Monitoring, auditing for bias is essential — especially in diverse populations.
  • Privacy, governance, storage, consent need to be built in from day one — especially in Global South population ID systems where trust is critical.
  • On-device enrolment and decentralised capture may reduce cost and improve coverage, but you still need central deduplication and ABIS search capability for large populations.
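
A worked example of the performance-buffer point above, using the fingerprint benchmark figure from the table and an illustrative enrolment volume. All inputs are assumptions to size the manual-adjudication workload, not predictions.

```python
def manual_review_load(daily_searches: int, lab_fnir: float,
                       field_factor: float) -> float:
    """Expected daily searches needing human adjudication, assuming every
    miss at the field-degraded rate falls back to an operator."""
    return daily_searches * lab_fnir * field_factor

lab_fnir = 0.0018          # ~0.18% fingerprint benchmark figure from the table
for factor in (2, 5):      # the 2-5x lab-to-field buffer suggested above
    load = manual_review_load(100_000, lab_fnir, factor)
    print(f"{factor}x degradation -> ~{load:.0f} manual reviews/day")
# 2x -> ~360/day; 5x -> ~900/day -- staffing and SLAs must absorb this.
```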

Slide 14 — Suggested Actions for Your Museum / Consultancy

  • Use your table as a “state-of-the-art” benchmark but include ideal conditions and field-realistic columns for deployed systems.
  • Build a visual diagram (you like mermaid.js) showing the gap between lab performance and field deployment performance (e.g., funnel from sensor → enrolment → algorithm → search → field error).
  • Include case-study slides of deployments in Global South (e.g., India, Africa) and their reported error/failure rates (search for “national ID biometric error rate” for examples).
  • Include a checklist for RFPs: accuracy thresholds, capture device standards, template size, algorithm benchmark results (NIST/independent), environmental/operational conditions, demographic fairness metrics.
  • Add a slide on “future trends” (mobile enrolment, edge matching, AI-based spoof detection, multimodal fusion, decentralised credentials) referencing your on-device vs ABIS discussion.
