Information technology - Extensible biometric data interchange formats -

Part 5:
Face image data

© ISO/IEC 2019
All rights reserved. Unless otherwise specified, or required in the context of its implementation, no part of this publication may be reproduced or utilized otherwise in any form or by any means, electronic or mechanical, including photocopying, or posting on the internet or an intranet, without prior written permission. Permission can be requested from either ISO at the address below or ISO's member body in the country of the requester.
ISO copyright office
CP 401 - Ch. de Blandonnet 8
CH-1214 Vernier, Geneva
Phone: +41 227490111
Fax: +41 227490947
Email: copyright@iso.org
Website: www.iso.org
Published in Switzerland

Contents

Foreword
Introduction
1 Scope
2 Normative references
3 Terms and definitions
4 Abbreviated terms
5 Conformance
6 Modality specific information
7 Abstract data elements
7.1 Overview
7.1.1 Content and notation
7.1.2 Structure overview
7.1.3 Data conventions
7.2 Face image data block
7.3 Version block
7.4 Representation block
7.5 Representation identifier
7.6 Capture date/time block
7.7 Quality blocks
7.8 PAD data block
7.9 Session identifier
7.10 Derived from
7.11 Capture device block
7.12 Model identifier block
7.13 Certification identifier blocks
7.14 Identity metadata block
7.15 Gender
7.16 Eye colour
7.17 Hair colour
7.18 Subject height
7.19 Properties block
7.20 Expression block
7.21 Pose angle block
7.22 Angle data block
7.23 Angle value
7.24 Angle uncertainty
7.25 Landmark block
7.26 Landmark kind
7.27 MPEG4 feature point
7.28 Anthropometric landmark
7.29 Landmark coordinates block
7.30 Image representation block
7.31 2D image representation block
7.32 2D representation data
7.33 2D capture device block
7.34 2D capture device spectral block
7.35 2D capture device technology identifier
7.36 2D image information block
7.37 2D face image kind
7.38 Post acquisition processing block
7.39 Lossy transformation attempts
7.40 Image data format
7.41 Camera to subject distance
7.42 Sensor diagonal
7.43 Lens focal length
7.44 Image size block
7.45 Width
7.46 Height
7.47 Image face measurements block
7.48 Image head width
7.49 Image inter-eye distance
7.50 Image eye-to-mouth distance
7.51 Image head length
7.52 Image colour space
7.53 Reference colour mapping block
7.54 Reference colour schema
7.55 Reference colour definition and value block
7.56 3D shape representation block
7.57 3D representation data
7.58 3D capture device block
7.59 3D modus
7.60 3D capture device technology identifier
7.61 3D image information block
7.62 3D representation kind block
7.63 3D vertex block
7.64 3D vertex information block
7.65 3D vertex coordinate block
7.66 3D vertex identifier
7.67 3D vertex normals block
7.68 3D vertex textures block
7.69 3D error map
7.70 3D vertex triangle data block
7.71 3D coordinate system
7.72 3D Cartesian coordinate system
7.73 3D Cartesian scales and offsets block
7.74 3D face image kind
7.75 3D physical face measurements block
7.76 3D physical head width
7.77 3D physical inter-eye distance
7.78 3D physical eye-to-mouth distance
7.79 3D physical head length
7.80 3D textured image resolution block
7.81 3D MM shape [X/Y/Z] resolution
7.82 3D MM texture resolution
7.83 3D texture acquisition period
7.84 3D face area scanned block
7.85 3D texture map block
7.86 3D texture capture device spectral block
7.87 3D texture standard illuminant
7.88 3D texture map data
8 Encoding
8.1 Overview
8.2 Tagged binary encoding
8.3 XML encoding
9 Registered BDB format identifiers
Annex A (normative) Formal specifications
Annex B (informative) Encoding examples
Annex C (normative) Conformance testing methodology
Annex D (normative) Application profiles
Annex E (informative) Additional technical considerations
Bibliography

ISO/IEC 39794-5:2019(E) 

Foreword 

ISO (the International Organization for Standardization) and IEC (the International Electrotechnical Commission) form the specialized system for worldwide standardization. National bodies that are members of ISO or IEC participate in the development of International Standards through technical committees established by the respective organization to deal with particular fields of technical activity. ISO and IEC technical committees collaborate in fields of mutual interest. Other international organizations, governmental and non-governmental, in liaison with ISO and IEC, also take part in the work. 
The procedures used to develop this document and those intended for its further maintenance are described in the ISO/IEC Directives, Part 1. In particular, the different approval criteria needed for the different types of document should be noted. This document was drafted in accordance with the editorial rules of the ISO/IEC Directives, Part 2 (see www.iso.org/directives). 
Attention is drawn to the possibility that some of the elements of this document may be the subject of patent rights. ISO and IEC shall not be held responsible for identifying any or all such patent rights. Details of any patent rights identified during the development of the document will be in the Introduction and/or on the ISO list of patent declarations received (see www.iso.org/patents) or the IEC list of patent declarations received (see http://patents.iec.ch). 
Any trade name used in this document is information given for the convenience of users and does not constitute an endorsement. 
For an explanation of the voluntary nature of standards, the meaning of ISO specific terms and expressions related to conformity assessment, as well as information about ISO's adherence to the World Trade Organization (WTO) principles in the Technical Barriers to Trade (TBT) see www.iso.org/iso/foreword.html. 
This document was prepared by Joint Technical Committee ISO/IEC JTC 1, Information technology, Subcommittee SC 37, Biometrics. 
A list of all parts in the ISO/IEC 39794 series can be found on the ISO website. 
Any feedback or questions on this document should be directed to the user’s national standards body. A complete listing of these bodies can be found at www.iso.org/members.html. 
The purchase of this ISO/IEC document carries a copyright licence for the purchaser to use ISO/IEC copyright in the schemas in the annexes to this document for the purpose of developing, implementing, installing and using software based on those schemas, subject to ISO/IEC licensing conditions set out in the schemas. 

Introduction 

Face images have been used for many decades to verify the identity of individuals. In recent years, digital face images have been used in many applications including human examination as well as computer-automated face recognition. Photographic formats are standardized, e.g., for passports and driver licences. There is also a need for a standard data format for digital face images to enable interoperability. A prominent case where such interoperability is essential is the electronic passport system, where face images are stored for several purposes including Automated Border Control. 
Biometric data interchange formats enable the interoperability of different biometric systems. The first generation of biometric data interchange formats was published between 2005 and 2007 in the first edition of the ISO/IEC 19794 series. From 2011 onwards, the second generation of biometric data interchange formats was published in the second edition of the established parts and the first edition of some new parts of the ISO/IEC 19794 series. In the second generation of biometric data interchange formats, new useful data elements such as data elements related to biometric sample quality were added, the header data structures were harmonized across all parts of the ISO/IEC 19794 series, and XML encoding was added in addition to the binary encoding. 
In anticipation of the need for additional data elements, and in order to avoid future compatibility issues, the ISO/IEC 39794 series provides standard biometric data interchange formats capable of being extended in a defined way. Extensible specifications in ASN.1 (Abstract Syntax Notation One) and the distinguished encoding rules (DER) of ASN.1 form the basis for encoding biometric data in binary tag-length-value formats. XSDs (XML schema definitions) form the basis for encoding biometric data in XML (eXtensible Markup Language). 
This third generation of face image data interchange formats complements ISO/IEC 19794-5:2005 and ISO/IEC 19794-5:2011. The first generation of biometric data interchange formats, which has been adopted, e.g., by ICAO for the biometric data stored in Machine Readable Travel Documents, is expected to be retained in the standards catalogue as long as needed. 
This document is intended to provide a generic face image data format for face recognition applications requiring exchange of face image data. Typical applications are: 
  • automated face biometric verification (one-to-one comparison) and identification (one-to-many comparison), and 
  • human verification of a biometric claim by comparison of data subjects against face images, including examination of face images with sufficient detail. 
In addition to the data format, this document specifies application specific profiles including scene constraints, photographic properties and digital image attributes like image spatial sampling rate, image size, etc. These application profiles are contained in Annex D. 
The structure of the data format in this document is not compatible with the previous generations. However, this revision introduces, for the first time, a mechanism for maintaining future extensions in a backwards- and forwards-compatible manner. This means that a parser is able to read data records and understand data items formatted according to versions of the standard that are older than, the same as, or newer than the version the parser was developed for. Newer data items will not disrupt the parsing process and can be ignored. Newer versions of this document will at least include the mandatory data items of the previous standards. 
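The skip-unknown behaviour described above follows directly from tag-length-value encoding: a parser walks the TLV stream and silently skips any tag it does not recognize. The sketch below is a minimal illustration of that idea, not the actual ASN.1/DER grammar of this document; the tag bytes and element names in KNOWN_TAGS are invented for the example.

```python
def parse_tlv(data):
    """Yield (tag, value) pairs from a BER/DER-encoded byte string."""
    i = 0
    while i < len(data):
        tag_start = i
        first = data[i]
        i += 1
        if first & 0x1F == 0x1F:          # high-tag-number form: continue while bit 8 set
            while data[i] & 0x80:
                i += 1
            i += 1
        tag = data[tag_start:i]
        length = data[i]
        i += 1
        if length & 0x80:                  # long-form length: next (length & 0x7F) octets
            n = length & 0x7F
            length = int.from_bytes(data[i:i + n], "big")
            i += n
        yield tag, data[i:i + length]
        i += length

# Hypothetical tag assignments, for illustration only.
KNOWN_TAGS = {b"\xa0": "versionBlock", b"\xa1": "representationBlocks"}

def read_record(data):
    """Decode known elements; unknown (newer) tags are skipped, not errors."""
    record = {}
    for tag, value in parse_tlv(data):
        name = KNOWN_TAGS.get(tag)
        if name is None:
            continue                       # forwards compatibility: ignore newer items
        record[name] = value
    return record
```

A record containing an unrecognized element (here the invented tag 0xA5) still parses: the unknown item is stepped over using its length octets and the remaining elements are decoded normally.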
The 3D encoding types 3D point map and range image are not supported by this edition of this document. 

Information technology - Extensible biometric data interchange formats -  

Part 5:
Face image data 

1 Scope 

This document specifies: 
  • generic extensible data interchange formats for the representation of face image data: a tagged binary data format based on an extensible specification in ASN.1 and a textual data format based on an XML schema definition that are both capable of holding the same information; 
  • examples of data record contents; 
  • application specific requirements, recommendations, and best practices in data acquisition; and 
  • conformance test assertions and conformance test procedures applicable to this document. 

2 Normative references 

The following documents are referred to in the text in such a way that some or all of their content constitutes requirements of this document. For dated references, only the edition cited applies. For undated references, the latest edition of the referenced document (including any amendments) applies. 
ISO/IEC 2382-37, Information technology - Vocabulary - Part 37: Biometrics 
ISO/IEC 8824-1, Information technology - Abstract Syntax Notation One (ASN.1) - Part 1: Specification of basic notation 
ISO/IEC 8825-1, Information technology - ASN.1 encoding rules - Part 1: Specification of Basic Encoding Rules (BER), Canonical Encoding Rules (CER) and Distinguished Encoding Rules (DER) 
ITU-T Rec. T.81 | ISO/IEC 10918-1, Information technology - Digital compression and coding of continuous-tone still images - Part 1: Requirements and guidelines 
ISO 11664-2:2007, Colorimetry - Part 2: CIE standard illuminants 
ISO/IEC 14496-2:2004, Information technology - Coding of audio-visual objects - Part 2: Visual 
ITU-T Rec. T.800 | ISO/IEC 15444-1, Information technology - JPEG 2000 image coding system - Part 1: Core coding system 
ISO/IEC 15948, Information technology - Computer graphics and image processing - Portable Network Graphics (PNG): Functional specification 
ISO/IEC 39794-1, Information technology - Extensible biometric data interchange formats - Part 1: Framework 
ICAO Doc 9303, Machine Readable Travel Documents 
W3C Recommendation, XML Schema Part 1: Structures (Second Edition), 28 October 2004, http://www.w3.org/TR/xmlschema-1/ 
W3C Recommendation, XML Schema Part 2: Datatypes (Second Edition), 28 October 2004, http://www.w3.org/TR/xmlschema-2/ 

3 Terms and definitions 

For the purposes of this document, the terms and definitions given in ISO/IEC 39794-1, ISO/IEC 2382-37 and the following apply. 
ISO and IEC maintain terminological databases for use in standardization at the following addresses: 
  • ISO Online browsing platform: available at https://www.iso.org/obp 
  • IEC Electropedia: available at http://www.electropedia.org/ 
3.1
1:1 application case 
biometric verification 
Note 1 to entry: Biometric verification is defined in ISO/IEC 2382-37 as a process of confirming a biometric claim through biometric comparison. 
3.2
1:N application case 
biometric identification 
Note 1 to entry: Biometric identification is defined in ISO/IEC 2382-37 as a process of searching against a biometric enrolment database to find and return the biometric reference identifier(s) attributable to a single individual. 
3.3
2D face image 
two-dimensional face representation that encodes the luminance and/or colour texture of the face of a capture subject in a given lighting environment 
3.4
3D face image 
three-dimensional face representation that encodes a surface in a 3D space 
3.5
3D vertex 
representation using 3D vertices and triangles between these points for coding of a 3D surface 
3.6
RGB 
colour space designed to encompass most of the colours achievable on CMYK colour printers, but by using red, green and blue primary colours on a device such as a computer display 
3.7
anthropometric landmark 
landmark on the face used for identification and classification of humans 
3.8
landmark code 
<anthropometric> two-part code that uniquely defines an anthropometric landmark 
3.9
camera to subject distance 
CSD 
distance between the eyes plane of a capture subject and the sensor/image plane of the camera 

3.10

Cartesian coordinate system 

3D orthogonal coordinate system 

3.11

chin 

central forward portion of the lower jaw 

3.12

CIE standard illuminant D65 

commonly used standard illuminant defined by the International Commission on Illumination (CIE) that is intended to represent average daylight and has a correlated colour temperature of approximately 6500 K 
Note 1 to entry: CIE standard illuminant D65 is specified in ISO 11664-2. 

3.13

colour image 

continuous tone image (3.16) that has more than one channel, each of which is coded with one or multiple bits 

3.14

colour space 

way of representing colours of pixels in an image 
EXAMPLE RGB and YUV colour spaces are typically used in this document. 

3.15

common biometric exchange formats framework 

CBEFF 

data format specifically for exchanging biometric data that provides for the encompassing of any biometric type into a standard format 

3.16

continuous tone image 

image whose channels have more than one bit per pixel 

3.17

crop factor 

ratio of the diagonal of the full frame camera (43,3 mm) to that of a selected camera's image sensor 
Note 1 to entry: The determination of an appropriate focal length lens for a field of view equivalent to a full frame camera can be made by considering the crop factor. 
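The relation in this entry and its note can be written out directly. A small sketch, using the full frame diagonal of 43,3 mm from the definition; the helper names are illustrative, not taken from this document:

```python
FULL_FRAME_DIAGONAL_MM = 43.3  # diagonal of a full frame (36 mm x 24 mm) sensor

def crop_factor(sensor_diagonal_mm):
    """Crop factor: ratio of the full frame diagonal to the sensor diagonal."""
    return FULL_FRAME_DIAGONAL_MM / sensor_diagonal_mm

def full_frame_equivalent_focal_length(focal_length_mm, sensor_diagonal_mm):
    """Focal length giving an equivalent field of view on a full frame camera
    (the determination described in Note 1 to entry)."""
    return focal_length_mm * crop_factor(sensor_diagonal_mm)
```

For example, a sensor with half the full frame diagonal (21,65 mm) has a crop factor of 2, so a 50 mm lens on it frames like a 100 mm lens on a full frame camera.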

3.18

crown 

top of the head ignoring any hair 

3.19

dots per inch 

DPI 

individual printed dots in a line or column within a span of 25,4 mm (1 inch) 

3.20

exposure value 

EV 
number that represents a combination of a camera's shutter speed and f-number, such that all combinations that yield the same exposure have the same exposure value 
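The defining property above, that all equal-exposure combinations share one value, is commonly captured by the formula EV = log2(N²/t), with f-number N and shutter time t in seconds. This formula is general photographic background, not taken from this document:

```python
import math

def exposure_value(f_number, shutter_time_s):
    """EV = log2(N^2 / t); EV 0 corresponds to f/1 for 1 s."""
    return math.log2(f_number ** 2 / shutter_time_s)
```

Closing the aperture by one stop (multiplying the f-number by √2) while doubling the shutter time leaves the EV unchanged, which is exactly the equivalence the definition describes.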


3.21

eye centre 

centre of the line connecting the inner and the outer corner of the eye 
Note 1 to entry: The eye centres are the feature points 12.1 and 12.2 as defined in ISO/IEC 14496-2:2004, Annex C. 
Note 2 to entry: The inner and the outer corner of the eye are defined in ISO/IEC 14496-2:2004: feature points 3.12 and 3.8 for the right eye, and 3.11 and 3.7 for the left eye. 
3.22
eye-to-mouth distance 
EMD 
distance between the face centre and the mouth midpoint 
Note 1 to entry: The mouth midpoint is the feature point 2.3 as defined in ISO/IEC 14496-2:2004, Annex C. 

3.23

eye visibility zone 

EVZ 

zone covering a rectangle having a margin to any part of the visible eyeball 
Note 1 to entry: The margin is defined in D.1.4.3.3. 

3.24

face centre 

M
midpoint of the line connecting the two eye centres 

3.25

face image kind 

category of face images (3.27) that satisfy specific requirements 

Note 1 to entry: Application specific requirements are specified in one of the application profiles in Annex D. 

3.26

facial animation parameter 

FAP 

standard for the virtual representation, which includes visual speech intelligibility, mood and gesture by using feature points 

Note 1 to entry: Visual representation as specified in ISO/IEC 14496-1 and ISO/IEC 14496-2. 

3.27

face image 

electronic image-based representation of the face of a capture subject 

3.28

face portrait 

visual representation of the capture subject, which includes the full-frontal part of the head, including hair in most cases, as well as neck and possibly top of shoulders 

3.29

face texture 

2D sampling face representation that encodes one or a combination of several spectral spatial modulations received by 3D imaging systems of a face in a given lighting system having a 2D coordinate link to the face shape 

3.30

feature point 

reference point in a face image as used by face recognition algorithms 
Note 1 to entry: Commonly referred to as a landmark, an example being the position of the eyes. 
3.31
fish eye 
type of distortion where central objects of the image erroneously appear closer than those at the edge 
3.32
Frankfurt Horizon 
standard plane for orientation of the head defined by a line passing through the right tragion (the front of the ear) and the lowest point of the right eye socket 
Note 1 to entry: The Frankfurt Horizon may be hard to define, as it is related to the ear position that may be covered by hair. 
Note 2 to entry: The Frankfurt Horizon has been defined in the Frankfurt-am-Main (anthropological) Agreement of 1882. 
3.33
greyscale image 
continuous tone image (3.16) encoded with one luminance channel 
Note 1 to entry: If the luminance channel is coded with 8 bits, the greyscale image is also referred to as a monochrome or black and white image. 
3.34
horizontal deviation angle 
HD 
maximal allowed deviation from the horizontal of the imaginary line between the nose of a capture subject and the lens of the camera 
3.35
human examination 
process of human comparison of a face image with an individual or another face image through detailed examination of face characteristics and structures for the purposes of biometric verification or identification 

3.36

human identification 
process of a human searching through a list of face images to match against an input image 
Note 1 to entry: Also known as one-to-many (1:N) searching. 
Note 2 to entry: Identification can be performed by human (experts) as well, and human identification may consider more than biometric data. 

3.37

human verification 
process of confirming a specific biometric claim by human comparison of a face image with an individual or another face image 
Note 1 to entry: Also known as one-to-one (1:1) comparison. 
Note 2 to entry: Verification can be performed by human (experts) as well, and human verification may consider more than biometric data. 

3.38
implementation under test
IUT
implementation of a technical system that is currently under test 

3.39
inner region 
pixels of a face image carrying data of the central region of a face 


3.40
inter-eye distance 
IED 
length of the line connecting the eye centres of the left and right eye 
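Entries 3.21 (eye centre), 3.24 (face centre) and 3.40 (IED) fit together geometrically: each eye centre is the midpoint of that eye's inner and outer corners, the face centre M is the midpoint of the line connecting the two eye centres, and the IED is the length of that line. A sketch with hypothetical pixel coordinates:

```python
import math

def midpoint(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def face_geometry(r_inner, r_outer, l_inner, l_outer):
    """Eye centres (3.21) are midpoints of the eye corners; the face centre M
    (3.24) is the midpoint of the eye centres; the IED (3.40) is their distance."""
    right_eye = midpoint(r_inner, r_outer)
    left_eye = midpoint(l_inner, l_outer)
    ied = math.hypot(right_eye[0] - left_eye[0], right_eye[1] - left_eye[1])
    return midpoint(right_eye, left_eye), ied
```

The corner points would in practice come from the MPEG-4 feature points cited in 3.21 (3.12/3.8 for the right eye, 3.11/3.7 for the left eye).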
3.41
issuer 
organization that issues Machine Readable Travel Documents (MRTDs) 
3.42
lower camel-case notation 
naming convention in which compound words are joined together without spaces, where the first letter of the entire word is lowercase, but the first letter of subsequent words is uppercase 

3.43

magnification distortion 
image imperfection where the degree of magnification varies with the distance from the camera and the depth of the face 
3.44
modus 
manner in which a particular property is acquired 
3.45
near infrared 
section of infrared band with wavelength from 780 nm to 3000 nm 
3.46
outer region 
pixels of a face image outside of the inner region 
3.47
photo booth 
automated system for digitally capturing 2D images in either public or office environments 
Note 1 to entry: A photo booth encloses the subject in a highly-controlled lighting environment and consists of a camera, lighting and peripheral devices such as printers. It has entrances on one or both sides with reflective curtains protecting against ambient light. 
3.48
photo kiosk 
semi-automated system for digitally capturing 2D images in an office-environment 
Note 1 to entry: A photo kiosk consists of camera and lighting and usually has a separate panel placed behind the subject to provide the required background but is otherwise open. 
3.49
pixel 
picture element on a two-dimensional array that comprises an image 
3.50
pixel per inch 
PPI 
individual pixels in a line or column of a digital image within a span of 25,4 mm (1 inch) 
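The unit conversions implied by this entry (and by the PPCM abbreviation in Clause 4) are simple arithmetic on the 25,4 mm inch; the helper names below are illustrative:

```python
MM_PER_INCH = 25.4

def ppi_to_ppcm(ppi):
    """Convert pixels per inch to pixels per centimetre (1 inch = 25,4 mm)."""
    return ppi / (MM_PER_INCH / 10.0)

def printed_width_mm(width_px, ppi):
    """Physical width of an image rendered at the given sampling rate."""
    return width_px / ppi * MM_PER_INCH
```

For instance, 254 PPI is exactly 100 PPCM, and a 600-pixel-wide image at 300 PPI spans 50,8 mm.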
3.51
presentation attack 
presentation of an artefact or human characteristic to the biometric capture subsystem in a fashion that could interfere with the intended policy of the biometric system 
3.52
presentation attack detection 
PAD 
automated determination of a presentation attack 
3.53
radial distortion 
image imperfection where the degree of magnification varies with the distance from the optical axis 
3.54
red eye effect 
red glow from a subject’s eye caused by light from flash reflecting from blood vessels behind the retina 
3.55
subject 
individual who is to be displayed on the face portrait 
Note 1 to entry: If the face portrait is part of a Machine Readable Travel Document (MRTD), this individual is intended to be the holder of the MRTD. 

3.56

upper camel-case notation 
naming convention in which compound words are joined together without spaces and the first letter of every word is uppercase 
3.57
wavelength 
distance between repeating units of a wave pattern 
Note 1 to entry: Commonly designated by the Greek letter lambda (λ). 
3.58
white light 
apparently colourless light to human perception 
EXAMPLE Ordinary daylight, standard illuminants such as D50, D65, etc. 
Note 1 to entry: For many purposes it is assumed that white light contains all wavelengths of the visible spectrum at equal intensity based on human perception. Strong deviations from equal intensity usually lead to deviations in the perception of colours. 

4 Abbreviated terms 

For the purposes of this document, the abbreviated terms given in ISO/IEC 39794-1 and the following apply. 
ABC Automated Border Control 
CCD charge-coupled device 
CMOS complementary metal-oxide-semiconductor 
CSD camera to subject distance 
DOVID diffractive optically variable image device 
DPI dots per inch 
EMD eye-to-mouth distance 


EV exposure value 
EVZ eye visibility zone 
FAP facial animation parameter 
FH Frankfurt Horizon 
HD horizontal deviation angle 
ICAO International Civil Aviation Organization 
IED inter-eye distance 
JPEG image compression standard specified as ISO/IEC 10918; the JPEG baseline standard was published as ITU-T Rec. T.81 | ISO/IEC 10918-1 
JPEG2000 image compression standard specified as ISO/IEC 15444; the JPEG2000 baseline standard was published as ITU-T Rec. T.800 | ISO/IEC 15444-1 
LDS logical data structure as defined in ICAO Doc 9303 
M face centre 
MP intensity measurement pattern side length 
MRTD machine readable travel document; the term also includes electronic MRTDs, i.e. machine readable travel documents using a contactless integrated circuit 
MTF modulation transfer function 
MTF20 highest spatial frequency where the MTF is 20 % or above 
NIR near infrared 
PPCM pixel per centimetre 
PPI pixel per inch 
PNG portable network graphics format specified as ISO/IEC 15948 
RFID radio-frequency identification 
RGB red green blue colour representation 
SFR spatial frequency response 
SNR signal to noise ratio 
sRGB  standard RGB colour space created for use on monitors, printers and the Internet using the ITU-R BT.709 primaries 

5 Conformance 

A BDB conforms to this document if it satisfies all the relevant requirements related to: 
  • its data structure, data values and the relationships between its data elements as specified throughout Clauses 7 and 8 and Annex A, 
  • the relationship between its data values and the input biometric data from which the BDB was generated as specified throughout Clauses 7 and 8 and Annex A, and 
  • application profile specific compliance specifications given in Annex C.4. 
A system that produces biometric data records is conformant to this document if all biometric data records that it outputs conform to this document (as defined above) as claimed in the ICS associated with that system. A system does not need to be capable of producing biometric data records that cover all possible aspects of this document, but only those that are claimed to be supported by the system in the ICS. 
A system that uses BDBs is conformant to this document if it can read, and use for the purpose intended by that system, all BDBs that conform to this document (as defined above) as claimed in the ICS associated with that system. A system does not need to be capable of using BDBs that cover all possible aspects of this document, but only those that are claimed to be supported by the system in an ICS. 

6 Modality specific information 

The recorded image data shall appear to be the result of a capture process of a face. For the purpose of describing the position of each pixel within an image to be exchanged, a pair of reference axes shall be used. The origin of the axes, pixel location (0, 0), shall be located at the upper left-hand corner of each image, which corresponds to the upper right-hand side of the forehead from the perspective of the capture subject. The x-coordinate (horizontal) position shall increase positively from the origin to the right side of the image (i.e. left-hand forehead). The y-coordinate (vertical) position shall increase positively from the origin to the bottom of the image. 
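In a row-major pixel buffer, the convention above means the vertical coordinate selects the row and the horizontal coordinate selects the column. A minimal sketch (the buffer layout is an assumption for illustration; the clause itself only fixes the axes):

```python
# Clause 6 convention: origin (0, 0) at the upper left-hand corner;
# x increases to the right, y increases downwards. Row-major storage
# therefore indexes row (y) first, then column (x):
def sample(image, x, y):
    return image[y][x]

image = [[10, 11, 12],   # y = 0: top row
         [20, 21, 22]]   # y = 1: bottom row
```

With this layout, `sample(image, 2, 1)` reads the rightmost pixel of the bottom row.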

7 Abstract data elements 

7.1 Overview 

7.1.1 Content and notation 

This clause describes the contents of data elements defined in this document. These semantic descriptions are independent of the encoding of the data elements. 
The presence of data elements is specified in Annex A. Certain data elements are optional. Such data elements need not be included in a BDB. An optional data element may be omitted altogether from the encoding. 
Application profiles as defined in Annex D may further restrict the presence of data elements. Such profiles may make optional elements mandatory, and they may exclude optional elements. 
In an ASN.1 module, optional data elements are marked with the keyword OPTIONAL. When such an element is not present, the tag, length and value octets of this data element are omitted from the tagged binary encoding. 
A data element in an XML schema definition is optional if the value of its minOccurs attribute is 0. When such an element is not present, the opening and closing tags as well as the value of this data element are omitted from the XML encoding. 
If all child elements of a data element are optional, this data element shall be marked optional as well. 
Type names are in upper camel-case notation derived from subclause titles in this clause. Element names are in lower camel-case notation derived from these subclause titles. If the generic name starts with a number, then this component is moved to the end of the base name. In the XSD, type names end with the word “Type”. 
EXAMPLE 1 The Image Colour Space element has the encoding name imageColourSpace and the type ImageColourSpace (in ASN.1) and ImageColourSpaceType (in XML). 
EXAMPLE 2 An element value with the abstract name colour coded light has the value colourCodedLight. An element value with the abstract name 48 bit RGB has the encoding value rgb48Bit. 
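For names without a leading number, the derivation can be sketched as follows; this is an illustrative helper, not part of the standard (in ASN.1 the type name is used without the “Type” suffix).

```python
def encoding_names(abstract_name):
    """Derive the element name and XSD type name from an abstract
    element name without a leading number (sketch of 7.1.1)."""
    words = abstract_name.split()
    # Upper camel-case base name, e.g. "ImageColourSpace".
    type_name = "".join(w.capitalize() for w in words)
    # Lower camel-case element name, e.g. "imageColourSpace".
    element_name = type_name[0].lower() + type_name[1:]
    return element_name, type_name + "Type"
```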

7.1.2 Structure overview 

The order of the abstract data elements in 7.2 and beyond is derived from traversing the tree in Figure 1 from left to right, depth first. A formal description of the structure is given in Annex A.1 for ASN.1 and in Annex A.2 for the XML encoding of these abstract data elements. 
Key 
  • elements which can be divided into sub-elements, not shown in this figure 
  • Exclusive Or (XOR): one, and only one, option shall be chosen 
  • 7.n: denotes that this element is defined in Clause 7.n 
The figure has been manually generated; its content is informative. The normative structure is given in A.1 for ASN.1 and A.2 for XML. 
Figure 1 - Face image data block 

7.1.3 Data conventions 

For value measurement the following units are used: 
  • physical measurement: millimetres; 
  • image measurement: pixels; 
  • left/right: from perspective of the subject. 
Unless otherwise specified, all other numeric values are unsigned integer quantities. 
The conversion of a numeric value to integer is given by rounding down if the fractional portion is less than 0,5 and rounding up if the fractional portion is greater than or equal to 0,5. 
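A minimal sketch of this rounding rule for non-negative values; note that a language's default rounding (e.g. Python's built-in round()) may use round-half-to-even instead, so it is not used here.

```python
import math


def to_integer(value):
    """Convert a non-negative numeric value to integer per 7.1.3:
    fractions below 0,5 round down, 0,5 and above round up."""
    return math.floor(value + 0.5)
```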
The absence of an optional element means that the encoder does not provide any statement about the value of the element. 

7.2 Face image data block 

Figure 2 - Example of embedding multiple representations in the same Face image data block 
Abstract values: None 
Contents: Each BDB shall pertain to a single subject and shall contain one or more representations of a human face. Together with the Version block, each BDB can contain one or more geometric representations in Representation blocks. The record structure is depicted in Figure 2. 

7.3 Version block 

Abstract values: The abstract values for the Version block are defined in ISO/IEC 39794-1. 
Contents: The generation number of this document shall be 3. The year shall be the year of the publication of this document. 
If a BDB contains representations encoded using different versions of an extensible biometric data interchange format, then the version number of the most recent version of the encoding versions shall be used. 

7.4 Representation block 

Abstract values: None. 
Contents: A Representation block consists of a unique Representation identifier characterizing this Representation block, an Image representation block, a Capture date/time block, Quality blocks, a PAD data block, a Session identifier, an identifier to define a relationship to another record, called Derived from, a Capture device block, an Identity metadata block describing discernible characteristics of the subject, and the Landmark blocks. The structure of this element is shown in Figure 1. 
Multiple face image representations of the same biometric data subject may be described in the same Face image data block. This is accomplished by including multiple Representation blocks. Face image representations containing 2D data may be combined with face image representations containing 3D data. 
EXAMPLE The structure of a possible storage of Representation blocks containing 2D and 3D data is illustrated in Figure 2. 

7.5 Representation identifier 

Abstract values: Integer. 
Contents: This element shall contain a unique identifier for the Representation block. Each representation shall have its own unique Representation identifier. 
NOTE Unlike other parts of the ISO/IEC 39794 series, this document requires Representation identifiers to link processed data to its original source. 

7.6 Capture date/time block 

Abstract values: See Capture date/time block in ISO/IEC 39794-1. 
Contents: The Capture date/time block shall indicate when the capture of this representation started in Coordinated Universal Time (UTC). 

7.7 Quality blocks 

Abstract values: See Quality blocks in ISO/IEC 39794-1. 
Contents: This element contains information on the biometric sample quality. 

7.8 PAD data block 

Abstract values: See PAD data block in ISO/IEC 39794-1. 
Contents: This element shall convey the mechanism used in biometric presentation attack detection and the results of the presentation attack detection mechanism. 

7.9 Session identifier 

Abstract values: Integer. 
Contents: This element shall map the Representation block to the photo session where the face image was recorded. 

7.10 Derived from 

Abstract values: Integer. 
Contents: This element shall denote interdependencies when multiple representations are stored in a Face image data block. This is of particular interest in the case where post-processing has been used but may be used in case of all other image types, too. The value shall be the Representation identifier number of the original representation. 
To give an example of an application of this specification, assume that there are two representations in the overall record. Their identifiers are 1 and 2. The first representation has been post-processed and resulted in the second representation. Then, the second representation shall have the Derived from element set to 1. 
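The linkage described above can be sketched with a minimal data structure; the field names here are hypothetical, not the encoded element names.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class RepresentationBlock:
    """Hypothetical container for the fields relevant to 7.10."""
    representation_id: int
    derived_from: Optional[int] = None  # identifier of the source representation


# Representation 2 was post-processed from representation 1.
original = RepresentationBlock(representation_id=1)
derived = RepresentationBlock(representation_id=2, derived_from=1)
```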

7.11 Capture device block 

Abstract values: See Capture device block in ISO/IEC 39794-1. 
Contents: The Capture device block contains the Model identifier block and the Certification identifier blocks. 

7.12 Model identifier block 

Abstract values: See Model identifier block in ISO/IEC 39794-1. 
Contents: The Model identifier block shall identify the biometric organization that manufactures the product that created the BDB. It shall carry a CBEFF biometric organization identifier (see ISO/IEC 39794-1). Additionally, it shall identify the product type that created the BDB. It shall be assigned by the registered product manufacturer or other approved registration authority (see ISO/IEC 39794-1). 

7.13 Certification identifier blocks 

Abstract values: See Certification identifier blocks in ISO/IEC 39794-1. 
Contents: This document does not contain details of certification schemes. 
NOTE Currently, no certification schemes are available for this document. 

7.14 Identity metadata block 

Abstract values: For the structure see Figure 1. 
Contents: The Identity metadata block is intended to describe properties of the subject pictured in the image. The Identity metadata block consists of the Gender, Eye colour, Hair colour, and Subject height elements, the Properties block, the Expression block, and the Pose angle block. 
If all elements of this element are absent, the element itself shall be absent, too. 

7.15 Gender 

Abstract values: The value of this element shall be one of the following: 
  • unknown; 
  • other; 
  • male; 
  • female. 
Contents: The Gender element shall represent the gender of the subject. 

7.16 Eye colour 

Abstract values: The value of this element shall be one of the following: 
  • unknown; 
  • other; 
  • black; 
  • blue; 
  • brown; 
  • grey; 
  • green; 
  • hazel; 
  • multi-coloured; 
  • pink. 
Contents: The Eye colour element shall represent the colour of the irises of the eyes. If the eyes have different colours, then the colour of the right eye shall be encoded. 

7.17 Hair colour 

Abstract values: The value of this element shall be one of the following: 
  • unknown; 
  • other; 
  • bald; 
  • black; 
  • blonde; 
  • brown; 
  • grey; 
  • white; 
  • red; 
  • known coloured (it is known that the hair colour has been changed from the natural one of that capture subject). 
Contents: The Hair colour element shall represent the colour of the hair of the subject. 

7.18 Subject height 

Abstract values: Integer. 
Contents: The Subject height element shall represent the height of the subject in millimetres. The minimum value for this element shall be 1 mm and the maximum value shall be 65535 mm. 
NOTE This value in most cases can only be used as a rough estimate of the subject height. Shoes, age, and even time of the day influence this measure. 

7.19 Properties block 

Abstract values: This element contains one or several of the following elements: 
  • glasses; 
  • moustache; 
  • beard; 
  • teeth visible; 
  • pupil or iris not visible (e.g. either or both eyes closed or half closed); 
  • mouth open; 
  • left eye patch; 
  • right eye patch; 
  • dark glasses (medical); 
  • biometric absent (conditions which could impact landmark detection); 
  • head coverings present (e.g., hats, scarves, toupees). 
Contents: The Properties block indicates which properties are present. There may be restrictions for different Face image kinds (see the profiles in Annex D). Each element may be true, false or absent. False elements do not need to be listed unless those elements are mandatory. 

7.20 Expression block 

Abstract values: This element contains one or several of the following items: 
  • neutral (non-smiling) with both eyes open and mouth closed; 
  • smile; 
  • raised eyebrows; 
  • eyes looking away from the camera; 
  • squinting; 
  • frowning. 
Contents: The Expression block indicates which expressions are shown. Each element may be true, false or absent. False elements do not need to be listed unless those elements are mandatory. Neutral and smile shall not be both true for the same image. 

7.21 Pose angle block 

Key 
1 pitch (P) 
2 yaw (Y) 
3 roll (R) 
The three elements together define the pose (Y, P, R). 
Figure 3 - Definition of pose angles with respect to the frontal view of the subject 
Abstract values: The Pose angle block contains Angle blocks for yaw, pitch, and roll. 
Contents: The Pose angle block shall represent the estimated or measured pose of the subject in the image. 
The angles encoded in this element are: 
Yaw angle block (Y): Rotation about the vertical (y) axis. The yaw angle Y is the rotation in degrees about the y-axis (vertical axis) shown in Figure 3. Frontal poses have a yaw angle of 0°. Positive angles represent faces looking to their left (a counter-clockwise rotation around the y-axis). 
Pitch angle block (P): Rotation about the horizontal side-to-side (x) axis. The pitch angle P is the rotation in degrees about the x-axis (horizontal axis) shown in Figure 3. Frontal poses have a pitch angle of 0°. Positive angles represent faces looking down (a counter-clockwise rotation around the x-axis). 
Roll angle block (R): Rotation about the horizontal back-to-front (z) axis. The roll angle R is the rotation in degrees about the z-axis (the horizontal axis from front to back) shown in Figure 3. Frontal poses have a roll angle of 0°. Positive angles represent faces tilted toward their right shoulder (a counter-clockwise rotation around the z-axis). A roll angle of 0° denotes that the left and right eye centres have identical y-coordinates. 
The angles are defined relative to the frontal pose of the subject, which has angles (Y = P = R = 0) as shown in Figure 3. The frontal pose is defined by the Frankfurt Horizon as the xz plane and the vertical symmetry plane as the yz plane, with the z-axis oriented in the direction of the face sight. Examples are shown in Figure 4. 
As the order of successive rotations around the different axes matters, the encoded rotation angles shall correspond to an order of execution starting from the frontal view. This order shall be: roll (about the front axis), then pitch (about the horizontal axis), and finally yaw (about the vertical axis). The (first executed) roll transformation will therefore always be in the image xy plane. 
From the point of view of executing a transformation from the observed view to a frontal view, the transformation order will therefore be in the opposite order: Yaw, pitch, and then roll. The encoded angles are from the frontal view to the observed view. The conversion to integer is specified in 7.1. 
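Under these conventions, the composed rotation from the frontal view can be sketched with rotation matrices. The sign conventions of the elementary matrices below are illustrative assumptions; the essential point is that roll is executed first and therefore appears right-most in the matrix product.

```python
import numpy as np


def pose_rotation(yaw, pitch, roll):
    """Compose the pose rotation of 7.21: starting from the frontal
    view, roll (about z) is applied first, then pitch (about x),
    then yaw (about y).  Angles are in degrees."""
    y, p, r = np.radians([yaw, pitch, roll])
    Rz = np.array([[np.cos(r), -np.sin(r), 0],
                   [np.sin(r),  np.cos(r), 0],
                   [0,          0,         1]])
    Rx = np.array([[1, 0,          0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p),  np.cos(p)]])
    Ry = np.array([[ np.cos(y), 0, np.sin(y)],
                   [ 0,         1, 0],
                   [-np.sin(y), 0, np.cos(y)]])
    # Roll is executed first, so Rz is right-most in the product.
    return Ry @ Rx @ Rz
```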
Figure 4 - Examples of pose angles in the form (Y, P, R) 

7.22 Angle data block 

Abstract values: Angle value and Angle uncertainty. 
Contents: The Angle data block element contains an Angle value and its corresponding Angle uncertainty. 

7.23 Angle value 

Abstract values: Integer, the minimum value is -180, the maximum value is 180. 
Contents: The Angle value is given by Tait-Bryan angles (in degrees). 

7.24 Angle uncertainty 

Abstract values: The minimum value of an Angle uncertainty variable is 0, the maximum value is 180. 
Contents: The Angle uncertainty represents the expected degree of uncertainty of the associated pose angle. The greater the uncertainty, the larger the value shall be. The Angle uncertainty allows storing an uncertainty or tolerance value for an angle. The true angle should be in the range of Angle value ± Angle uncertainty. If the associated pose angle is absent, the Angle uncertainty for this angle shall be absent, too. 
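A sketch of the resulting range and presence constraints, combining 7.23 and 7.24; the helper function is hypothetical.

```python
def valid_angle_block(angle_value, angle_uncertainty=None):
    """Check the constraints of 7.23 and 7.24: the angle value lies
    in [-180, 180]; the uncertainty, when present, lies in [0, 180]
    and may only accompany a present angle value."""
    if angle_value is None:
        # No angle implies the uncertainty shall be absent, too.
        return angle_uncertainty is None
    if not -180 <= angle_value <= 180:
        return False
    return angle_uncertainty is None or 0 <= angle_uncertainty <= 180
```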

7.25 Landmark block 

Abstract values: None. 
Contents: The Landmark block specifies the type, code and position of landmarks in the face image. If the Landmark blocks element is present, it shall contain at least one Landmark block. A Landmark block consists of the Landmark kind element and the Landmark coordinates block. The structure of this element is shown in Figure 1. 
Landmarks can be specified as MPEG-4 feature points as given by ISO/IEC 14496-2:2004, Annex C, or as anthropometric landmarks. The description of the anthropometric landmarks[2] and their relation to the set of MPEG4 feature points is given in Table 2. 

7.26 Landmark kind 

Abstract values: MPEG4 feature point or anthropometric landmark.

Contents: The Landmark kind shall either be MPEG4 feature point or anthropometric landmark. The Landmark code shall specify the landmark that is stored in the Landmark block element. The MPEG4 feature points are extended by eye and nostril landmarks. 
References to right and left shall be taken from the perspective of the subject contained within an image. References to right shall mean the right side of the body from the perspective of the subject. References to the left shall mean the left side of the body from the perspective of the subject.

7.27 MPEG4 feature point

Abstract values: See Figure 5 and Figure 6.

Contents: Figure 5 denotes the landmark codes associated with feature points as given by ISO/IEC 14496-2:2004, Annex C. Each landmark can be written in the form A.B using a major (A) and a minor (B) value. Eye and nostril landmarks are contained as an addition to the MPEG4 feature points. 

Key 
  • feature points affected by face animation parameters (FAPs) as specified in ISO/IEC 14496-2 
  • other feature points 
Figure 5 - Feature points as specified in ISO/IEC 14496-2 
The eye centre landmarks 12.1 (left) and 12.2 (right) are defined to be the horizontal and vertical midpoints of the eye corners (3.7, 3.11) and (3.8, 3.12) respectively. The left nostril centre landmark 12.3 is defined to be the midpoint of the nose landmarks (9.1, 9.15) in the horizontal direction and (9.3, 9.15) in the vertical direction. Similarly, the right nostril centre landmark 12.4 is defined to be the midpoint of the nose landmarks (9.2, 9.15) in the horizontal direction and (9.3, 9.15) in the vertical direction. Both the eye centre and nostril centre landmarks are shown in Figure 6 and their values are given in Table 1. 

Key 
  • feature points affected by FAPs 
  • other feature points 
The landmarks 12.1, 12.2, 12.3, and 12.4 are defined to be the midpoints of MPEG4 feature points. 
Figure 6 - Eye and nostril centre landmarks 
Table 1 - Eye and nostril centre landmark codes 

| Centre landmark | Midpoint of landmarks | Landmark code |
| :--- | :--- | :--- |
| Left eye | 3.7, 3.11 | 12.1 |
| Right eye | 3.8, 3.12 | 12.2 |
| Left nostril | horizontal: 9.1, 9.15; vertical: 9.3, 9.15 | 12.3 |
| Right nostril | horizontal: 9.2, 9.15; vertical: 9.3, 9.15 | 12.4 |
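Assuming landmark positions are available as (x, y) pixel coordinates keyed by their MPEG4 codes, the centre landmarks of Table 1 can be derived as midpoints; the dictionary layout used here is an illustrative assumption.

```python
def centre_landmarks(pts):
    """Compute the eye and nostril centre landmarks (12.1-12.4)
    as midpoints of MPEG4 feature points, per Table 1.
    pts maps landmark codes (e.g. "3.7") to (x, y) tuples."""
    def mid(a, b):
        return (a[0] + b[0]) / 2, (a[1] + b[1]) / 2

    return {
        "12.1": mid(pts["3.7"], pts["3.11"]),  # left eye centre
        "12.2": mid(pts["3.8"], pts["3.12"]),  # right eye centre
        # Nostril centres: horizontal midpoint of (9.1 or 9.2, 9.15),
        # vertical midpoint of (9.3, 9.15).
        "12.3": ((pts["9.1"][0] + pts["9.15"][0]) / 2,
                 (pts["9.3"][1] + pts["9.15"][1]) / 2),
        "12.4": ((pts["9.2"][0] + pts["9.15"][0]) / 2,
                 (pts["9.3"][1] + pts["9.15"][1]) / 2),
    }
```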

ISO/IEC 39794-5:2019(E)

7.28 Anthropometric landmark

Abstract values: See Table 2.

Contents: Anthropometric landmarks denote feature points that are used in forensics and anthropology for recognition of individuals from two face images, or from a face image and a skull, over long time spans. They also allow specification of points that are in use by criminal examiners and anthropologists[2]. 
Figure 7 and Table 2 show the definition of the anthropometric landmarks. The set of points represents the craniofacial landmarks of the head and face. The latter are used in forensics for “face to face” and “skull to face” identification. Some of these points have MPEG4 counterparts, others do not. 

There are three different possibilities to encode an Anthropometric landmark:

Firstly, each Anthropometric landmark may be notated in the form A.B. A specifies the global landmark of the face to which this landmark belongs, such as nose, mouth, etc. B specifies the particular point. In case a landmark has two symmetrical entities (left and right), the right entity always has the greater, even minor code value. Hence, all landmarks on the left part of the face have odd minor codes, and those on the right part have even minor codes. 
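The parity convention above can be sketched as follows; the helper is hypothetical and applies only to landmarks that occur as a symmetrical left/right pair.

```python
def landmark_side(code):
    """Return the facial side encoded in an anthropometric landmark
    code of the form A.B, per 7.28: paired left-side landmarks carry
    odd minor codes, right-side ones even codes.  Midline (unpaired)
    landmarks are out of scope of this check."""
    major, minor = code.split(".")
    return "right" if int(minor) % 2 == 0 else "left"
```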

Secondly, each Anthropometric landmark may be notated by its name. In case a landmark has two symmetrical entities (left and right) a “left” or “right” shall be added to the names in Table 2.

Thirdly, each Anthropometric landmark may be notated by its point identifier. In case a landmark has two symmetrical entities (left and right) a “left” or “right” shall be added to the names in Table 2.

Key 
  • landmarks without MPEG4 counterpart 
  • landmarks with MPEG4 counterpart 
Figure 7 - Anthropometric landmarks 
Table 2 - Definitions of the anthropometric landmarks 
| Anthropometric landmark point identifier | Anthropometric landmark point name | MPEG4 | Anthropometric landmark name | How to point |
| :--- | :--- | :--- | :--- | :--- |
| v | 1.1 | 11.4 | vertex | The highest point of the head when the head is oriented in the Frankfurt Horizon |
| g | 1.2 | | glabella | The most prominent middle point between the eyebrows |
| op | 1.3 | | opisthocranion | Situated in the occipital region of the head, most distant from the glabella |
| eu | 1.5, 1.6 | | eurion | The most prominent lateral point on each side of the skull in the area of the parietal and temporal bones |
| ft | 1.7, 1.8 | | frontotemporale | The point on each side of the forehead, laterally from the elevation of the linea temporalis |
| tr | 1.9 | 11.1 | trichion | The point on the hairline in the midline of the forehead |
| zy | 2.1, 2.2 | | zygion | The most lateral point of each of the zygomatic arches |
| go | 2.3, 2.4 | 2.13, 2.14 | gonion | The most lateral point on the mandibular angle close to the bony gonion |


Table 2 (continued) 

| Anthropometric landmark point identifier | Anthropometric landmark point name | MPEG4 | Anthropometric landmark name | How to point |
| :--- | :--- | :--- | :--- | :--- |
| sl | 2.5 | | sublabiale | Determines the lower border of the lower lip or the upper border of the chin |
| pg | 2.6 | 2.10 | pogonion | The most anterior midpoint of the chin, located on the skin surface in front of the identical bony landmark of the mandible |
| gn | 2.7 | 2.1 | menton (or gnathion) | The lowest median landmark on the lower border of the mandible |
| cdl | 2.9, 2.10 | | condylion laterale | The most lateral point on the surface of the condyle of the mandible |
| en | 3.1, 3.2 | 3.11, 3.8 | endocanthion | The point at the inner commissure of the eye fissure |
| ex | 3.3, 3.4 | 3.7, 3.12 | exocanthion (or ectocanthion) | The point at the outer commissure of the eye fissure |
| p | 3.5, 3.6 | 3.5, 3.6 | centre point of pupil | Determined when the head is in the rest position and the eye is looking straight forward |
| or | 3.7, 3.8 | 3.9, 3.10 | orbitale | The lowest point on the lower margin of each orbit |
| ps | 3.9, 3.10 | 3.1, 3.2 | palpebrale superius | The highest point in the midportion of the free margin of each upper eyelid |
| pi | 3.11, 3.12 | 3.3, 3.4 | palpebrale inferius | The lowest point in the midportion of the free margin of each lower eyelid |
| os | 4.1, 4.2 | | orbitale superius | The highest point on the lower border of the eyebrow |
| sci | 4.3, 4.4 | 4.3, 4.4 | superciliare | The highest point on the upper border in the midportion of each eyebrow |
| n | 5.1 | | nasion | The point in the middle of both the nasal root and the nasofrontal suture |
| se | 5.2 | | sellion (or subnasion) | The deepest landmark located on the bottom of the nasofrontal angle |
| al | 5.3, 5.4 | 9.1, 9.2 | alare | The most lateral point on each alar contour |
| prn | 5.6 | 9.3 | pronasale | The most protruded point of the apex nasi |
| sn | | 9.15 | subnasale | The craniometric point at the base of the nasal (nose) spine |
| sbal | | | subalare | |
| ac | | 9.1, 9.2 | alar curvature (or alar crest) point | The nasal alar crest |
| mf | | 9.6, 9.7 | maxillofrontale | |
| cph | | 8.9, 8.10 | christa philtra landmark | The point on the crest of the philtrum, the vertical groove in the median portion of the upper lip, just above the vermillion border (sharp demarcation between the lip and the adjacent normal skin) |
Table 2 (continued) 
| Anthropometric landmark point identifier | Anthropometric landmark point name | MPEG4 | Anthropometric landmark name | How to point |
| :--- | :--- | :--- | :--- | :--- |
| ls | | 8.1 | labiale (or labrale) superius | The midpoint of the vermillion border of the upper lip |
| li | | 8.2 | labiale (or labrale) inferius | The midpoint of the vermillion border of the lower lip |
| ch | | 8.3, 8.4 | cheilion | The outer corner of the mouth where the outer edges of the upper and lower vermillions meet |
| sto | | | stomion | The median point of the oral slit when the lips are closed |
| sa | | 10.1, 10.2 | superaurale | The furthermost point of the ear lobe when measured from the sba landmark |
| sba | | 10.5, 10.6 | subaurale | The lowest point on the inferior (lower) border of the ear lobule when the subject is looking straight ahead |
| pra | | 10.9, 10.10 | preaurale | The point between obs and obi opposite to pa |
| pa | | | postaurale | The most posterior point on the free margin of the ear |
| obs | | 10.3, 10.4 | otobasion superius | The highest point of attachment of the external ear to the head |
| obi | | | otobasion inferius | The lowest point of attachment of the external ear to the head |
| po | | | porion (soft) | The central point on the upper margin of the external auditory meatus (passage in the ear) |
| t | | | tragion | A cephalometric point in the notch just above the tragus (small tonguelike projection of the auricular cartilage) of the ear |

7.29 Landmark coordinates block 

Abstract values: None. 
Contents: The Landmark coordinates block shall contain the coordinates of the associated landmark in the 2D Cartesian coordinate system (in case of 2D image representation block existence), in a Coordinate texture image block, or in a 3D Cartesian coordinate system (in case of 3D image representation block existence). 
In 2D image representation blocks, the Z coordinate of the Cartesian coordinate system is not used. This element shall then contain the horizontal and vertical position of the associated landmark, measured in pixels with values from 0 to width-1 and from 0 to height-1. The Coordinate texture image block consists of the two integer values uInPixel and vInPixel. In 3D shape representation blocks, the X, Y, and Z coordinates are mandatory and defined in the 3D Cartesian coordinate system. The X, Y, and Z coordinates are non-negative integers. The landmarks are converted to metric Cartesian coordinates using the Cartesian scales and offsets block. The error of the Z coordinate of an anthropometric landmark location should be no greater than 3 mm. The point shall lie no further than 3 mm from the nearest point on the surface. 
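As an informative illustration of the coordinate rules above, the following sketch checks the 2D pixel-range constraint and the 3D non-negative-integer constraint. The function names are illustrative only and are not defined by this document.

```python
def landmark_2d_in_bounds(x, y, width, height):
    """Check the 2D rule: pixel positions run from 0 to width-1
    horizontally and from 0 to height-1 vertically."""
    return 0 <= x <= width - 1 and 0 <= y <= height - 1


def landmark_3d_valid(x, y, z):
    """Check the 3D rule: X, Y and Z are non-negative integers;
    conversion to metric coordinates happens later via the
    Cartesian scales and offsets block."""
    return all(isinstance(c, int) and c >= 0 for c in (x, y, z))
```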

7.30 Image representation block 

Abstract values: Either 2D image representation block, or 3D shape representation block. 
Contents: The Image representation block contains the image data and metadata. It is either a 2D image representation block or a 3D shape representation block. 

7.31 2D image representation block 

Abstract values: None. 
Contents: The 2D image representation block contains the 2D representation data, the 2D image information block, and the 2D capture device block. 

7.32 2D representation data 

Abstract values: Octet string. 
Contents: The 2D representation data element shall contain the encoded image data in accordance with the value of the Image data format element. 

7.33 2D capture device block 

Abstract values: None. 
Contents: The 2D capture device block consists of the 2D capture device spectral block and the 2D capture device technology identifier. 

7.34 2D capture device spectral block 

Abstract values: The possible values are: 
  • near infrared; 
  • thermal; 
  • white light. 
Contents: Many different types of capture devices work in the near infrared, thermal, or white light spectral range. The 2D capture device spectral block indicates whether the capture device technology uses one or more of these spectral ranges. 

7.35 2D capture device technology identifier 

Abstract values: The possible values are: 
  • unknown; 
  • static photograph from an unknown source; 
  • static photograph from a digital still-image camera; 
  • static photograph from a scanner; 
  • video frame(s) from an unknown source; 
  • video frame(s) from an analogue video camera; 
  • video frame(s) from a digital video camera. 
Contents: The 2D capture device technology identifier shall indicate the device technology used to acquire the captured biometric sample. 

7.36 2D image information block 

Abstract values: None. 
Contents: The 2D image information block element is intended to describe digital properties of the 2D representation data. 
The 2D image information block consists of the Image data format, the 2D face image kind, the Post-acquisition processing block, the Lossy transformation attempts element, the Camera to subject distance, the Sensor diagonal, the Lens focal length, the Image size block, the Image face measurements block, the Image colour space element, and the Reference colour mapping block. The structure of this element is shown in Figure 1. 

7.37 2D face image kind 

Abstract values: See Table 3 for a list of allowed 2D face image kinds and their normative requirements. Other application specific image types may be added in the future. 
Contents: The 2D face image kind element shall represent the type of the face image stored in the 2D representation data. There are several types according to the chosen application specific profile (see Annex D); additional profiles may be included in future versions of this document. 
Table 3 - 2D face image kinds 

| Value | Definition and normative requirements |
| :--- | :--- |
| MRTD | Annex D.1 |
| General purpose | Annex D.2 |

7.38 Post acquisition processing block 

Abstract values: The values of this block shall be one or more of the following: 
  • rotated (in-plane); 
  • cropped; 
  • down-sampled; 
  • white balance adjusted; 
  • multiply compressed; 
  • interpolated; 
  • contrast stretched; 
  • pose corrected; 
  • multi view image; 
  • age progressed; 
  • super-resolution processed; 
  • normalised. 
There may be restrictions on the allowed values by the choice of the 2D face image kind. 
Contents: This element contains notifications on potential post acquisition processing steps. 
While the alteration of face image data is discouraged, there are cases when no alternative may exist: 
  • A legacy database of 3/4-frontal face images that shall be rotated to full frontal prior to biometric comparison. 
  • From a frontal image, artificial non-frontal face images are automatically generated at predetermined non-frontal poses (multi-view images) using an implicit head model or similar. These images can be beneficial during the comparison process or a manual review process, as they show a more similar pose than the original frontal image. 
  • A single image is to be age progressed and used for verification of a passport holder. 
  • A short video stream is super-resolved to a single face image for comparison against a watch list. 
The Post acquisition processing block allows the specification of the kind of post processing that has been applied to the original captured image. 
On the one hand a captured image might need some post-processing so that the resulting representation conforms to the requirements of this document. On the other hand, these processing steps should be minimal and not distort the characteristics of the original image. 

7.39 Lossy transformation attempts 

Abstract values: Unknown, 0, 1, more than 1. 
Contents: This element counts the number of previous lossy transformation steps. 

7.40 Image data format 

Abstract values: The values shall be specified according to Table 4. 
Contents: The Image data format denotes the encoding type of the 2D representation data and of the 3D texture map. 
For lossless compression, PNG or JPEG2000 lossless shall be used. For lossless representation of images using more than 8 bits per channel, PNG or JPEG2000 lossless shall be used. For lossy representation of images using more than 8 bits per channel, JPEG2000 shall be used. For an encoding in Netpbm portable binary, the image formats P5 (grey, PGM) and P6 (colour, PPM) shall be used. 
Table 4 - Image data format codes 
| Value | Specified in |
| :--- | :--- |
| unknown | |
| other | |
| jpeg | ITU-T Rec. T.81 \| ISO/IEC 10918-1 and Reference [3] |
| jpeg2000 lossy, jpeg2000 lossless | ISO/IEC 15444-1 |
| png | ISO/IEC 15948 |
| pgm | Reference [33] |
| ppm | Reference [34] |
If the Image data format value is unknown or other or a later-version extension code, then the Image size block (with width and height) shall be included. 
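The format rules above can be summarized as a small selection helper. This is an informative sketch with an illustrative function name; the availability of JPEG for lossy 8-bit images is inferred from Table 4 rather than stated as a requirement.

```python
def permitted_formats(lossless, bits_per_channel):
    """Return the image data formats permitted by the rules of 7.40
    for a given compression mode and channel depth (illustrative)."""
    if lossless:
        # Lossless compression, including more than 8 bits per channel.
        return ["png", "jpeg2000 lossless"]
    if bits_per_channel > 8:
        # Lossy representation above 8 bits per channel.
        return ["jpeg2000 lossy"]
    # Lossy at 8 bits per channel or less: JPEG (Table 4) or JPEG2000 lossy.
    return ["jpeg", "jpeg2000 lossy"]
```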
In the event that a greyscale face image is encoded in the Netpbm portable greyscale binary image format (PGM), the format definition is as follows: 
  1. a “magic number” = “P5” for identifying the file type followed by: 
  2. any Whitespace (blanks, TABs, CRs, LFs); 
  3. a width, formatted as ASCII characters in decimal; 
  4. any Whitespace (blanks, TABs, CRs, LFs); 
  5. a height, formatted as ASCII characters in decimal; 
  6. any Whitespace (blanks, TABs, CRs, LFs); 
  7. the maximum grey value (Maxval), formatted as ASCII characters in decimal - the value shall be smaller than 256, and larger than zero; 
  8. a single Whitespace character (usually a newline); 
  9. a raster of Height rows, in order from top to bottom. Each row consists of Width grey values, in order from left to right. Each grey value is a number from 0 through Maxval, with 0 being black and Maxval being white. Each grey value is represented in pure binary by either 1 or 2 bytes. If the Maxval is less than 256, it is 1 byte. Otherwise, it is 2 bytes. The most significant byte is first. 
A PGM encoded greyscale face image shall be encoded in a P5 format. 
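As an informative illustration, steps 1 to 9 above can be serialized as follows. The function name is illustrative; the byte-width rule for samples follows step 9 (one byte when Maxval is below 256, otherwise two bytes, most significant byte first).

```python
def encode_pgm_p5(width, height, maxval, pixels):
    """Serialize a row-major greyscale raster as Netpbm P5 (PGM).

    `pixels` is a flat list of grey values in [0, maxval], top row
    first, left to right within each row.
    """
    if not 0 < maxval < 65536:
        raise ValueError("Maxval must be larger than zero and below 65536")
    # Steps 1-8: magic number, whitespace-separated ASCII decimal
    # width, height and Maxval, then a single whitespace character.
    header = f"P5\n{width} {height}\n{maxval}\n".encode("ascii")
    # Step 9: one byte per sample if Maxval < 256, else two bytes,
    # most significant byte first.
    nbytes = 1 if maxval < 256 else 2
    raster = b"".join(v.to_bytes(nbytes, "big") for v in pixels)
    return header + raster
```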
In the event that a colour face image is encoded in the Netpbm portable colour binary image format (PPM), the format definition is as follows: 
  1. a “magic number” = “P6” for identifying the file type followed by: 
  2. any Whitespace (blanks, TABs, CRs, LFs); 
  3. a width, formatted as ASCII characters in decimal; 
  4. any Whitespace (blanks, TABs, CRs, LFs); 
  5. a height, formatted as ASCII characters in decimal; 
  6. any Whitespace (blanks, TABs, CRs, LFs); 
  7. the maximum channel value (Maxval), formatted as ASCII characters in decimal - the value shall be smaller than 256, and larger than zero; 
  8. a single Whitespace character (usually a newline); 
  9. a raster of Height rows, in order from top to bottom. Each row consists of Width pixel values, in order from left to right. Each pixel value is represented by 1 number for red, 1 number for green and 1 number for blue, each from 0 through Maxval; thus each pixel value is represented in pure binary by 3 bytes. 
A PPM encoded colour face image shall be encoded in a P6 format. 
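The P6 header (steps 1 to 8 above) can likewise be parsed as follows. This is an informative sketch with an illustrative function name; it handles only the whitespace-separated tokens listed above and does not attempt to cover Netpbm comment lines, which this format definition does not mention.

```python
import re


def parse_ppm_p6_header(data):
    """Parse a binary PPM (P6) header: magic number, width, height,
    Maxval, each separated by whitespace (blanks, TABs, CRs, LFs).

    Returns (width, height, maxval, raster_offset), where
    raster_offset is the index of the first raster byte.
    """
    m = re.match(rb"P6\s+(\d+)\s+(\d+)\s+(\d+)\s", data)
    if not m:
        raise ValueError("not a binary PPM (P6) header")
    width, height, maxval = (int(g.decode("ascii")) for g in m.groups())
    if not 0 < maxval < 256:
        raise ValueError("Maxval shall be larger than zero and smaller than 256")
    return width, height, maxval, m.end()
```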

7.41 Camera to subject distance 

Abstract values: Integer. 
Contents: The Camera to subject distance (CSD) element contains the camera to subject distance of the photographical setup used for capturing the photo, in millimetres. The maximum CSD to be encoded is 50000 mm. All larger distances shall be encoded using that maximum value. 

7.42 Sensor diagonal 

Abstract values: Integer. 
Contents: The Sensor diagonal element contains the diagonal length of the camera sensor used for capturing the photo, in millimetres. The maximum Sensor diagonal to be encoded is 2000 mm. All larger values shall be encoded using that maximum value. 
Figure 8 illustrates the relative sizes of some commonly available image sensors. Table 5 provides the approximate widths, heights, areas, diagonals, and crop factors for these sensors. The dimensions in Table 5 are approximate and serve as examples. 
Figure 8 - Typical sensors and their relation in size to the traditional full frame 
It might be noted that, by gathering more light, a larger image sensor will typically provide lower image noise, and a fixed focal length lens will generally provide a higher image quality than a zoom lens of the same focal length. Moreover, by using a fixed focal length lens, the problem of inadvertent change to the zoom ratio (i.e. the field of view) can be avoided. 

ISO/IEC 39794-5:2019(E)

Table 5 - Typical image sensor sizes and corresponding crop factors 
| Sensor type | Width (mm) | Height (mm) | Area (mm²) | Diagonal (mm) | Crop factor |
| :--- | :--- | :--- | :--- | :--- | :--- |
| Full frame | 36 | 24 | 864 | 43,3 | 1 |
| APS-H | 28,7 | 19 | 545 | 34,4 | 1,26 |
| APS-C | 25,1 | 16,7 | 419 | 30,1 | 1,44 |
| Four thirds system (4/3) | 17,3 | 13 | 225 | 21,6 | 2,00 |
| 1 inch (1") | 13,2 | 8,8 | 116 | 15,9 | 2,73 |
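As an informative note on the last column of Table 5: the crop factor is conventionally the diagonal of the traditional full frame (36 mm × 24 mm, about 43,3 mm) divided by the sensor diagonal. This convention is not stated normatively here, but the sketch below reproduces the tabulated values.

```python
import math

# Diagonal of the traditional 36 mm x 24 mm full frame, about 43,3 mm.
FULL_FRAME_DIAGONAL_MM = math.hypot(36, 24)


def crop_factor(width_mm, height_mm):
    """Crop factor as the ratio of the full-frame diagonal to the
    sensor diagonal (the convention behind Table 5, illustrative)."""
    return FULL_FRAME_DIAGONAL_MM / math.hypot(width_mm, height_mm)
```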

7.43 Lens focal length 

Abstract values: Integer. 
Contents: The Lens focal length element contains the focal length of the camera lens used for capturing the photo, in millimetres. The maximum Lens focal length to be encoded is 2000 mm. All larger values shall be encoded using that maximum value. If a zoom lens is used, this data element shall encode the actual focal length used to capture the image. 
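The Camera to subject distance, Sensor diagonal and Lens focal length elements all share the same saturation rule: values above the stated maximum are encoded as that maximum. An informative sketch (illustrative function name):

```python
def encode_measurement_mm(value_mm, max_mm):
    """Clamp a physical measurement to the maximum encodable value,
    as required for Camera to subject distance (50000 mm), Sensor
    diagonal (2000 mm) and Lens focal length (2000 mm)."""
    if value_mm < 0:
        raise ValueError("measurements in millimetres are non-negative")
    return min(value_mm, max_mm)
```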

7.44 Image size block 

Abstract values: None. 
Contents: The Image size block consists of the Width and the Height element. 

7.45 Width 

Abstract values: Integer. 
Contents: The Width element shall specify the number of pixels of the 2D representation data in the horizontal direction. 

7.46 Height 

Abstract values: Integer. 
Contents: The Height element shall specify the number of pixels of the 2D representation data in the vertical direction. 

7.47 Image face measurements block 

Abstract values: None. 
Contents: For specific application domains different minimal spatial sampling rates of the interchange data may be required. For example, using higher spatial sampling rate images allow for specific human as well as machine inspection methods that depend on the analysis of very small details. 
The Image face measurements block consists of four elements. If the number of pixels across the width of the head is to be encoded, the Image head width element may be used. If the number of pixels across the length of the head is to be encoded, the Image head length element may be used. If the inter-eye distance is to be encoded, the Image inter-eye distance element may be used. If the eye-to-mouth distance is to be encoded, the Image eye-to-mouth distance element may be used. If necessary, all four elements may be used. 

7.48 Image head width 

Abstract values: Integer. 
Contents: The Image head width element provides information on the number of pixels in the image across the width of the head. The head width ( W ) is defined in Figure 9. 

a) Abstract geometric characteristics 

b) Sample 

Key 

A image width 
B image height 
W head width 
L head length 
V vertical centre line 
H horizontal centre line 
M face centre 
Figure 9 - Abstract geometric characteristics of a portrait applied to a sample 
NOTE The typical inter-eye distance is approximately half of the head width. 

7.49 Image inter-eye distance 

Abstract values: Integer. 
Contents: The Image inter-eye distance element provides information on the number of pixels in the image between the eye centres (feature points 12.1 and 12.2). For an explanation of the inter-eye distance see Figure 10. The value of this element shall be the number of pixels between the eye centres. 
Key 
1 eye centre 
2 inner canthus 
3 outer canthus 
4 Inter-eye distance 
Figure 10 - Inter-eye distance (IED) measurement 
NOTE 1 Be aware that the eye centre is not necessarily the centre of the pupil. 
NOTE 2 A typical real IED (distance measured at the face) is between 60 mm and 65 mm. 

7.50 Image eye-to-mouth distance 

Abstract values: Integer. 
Contents: The Image eye-to-mouth distance element provides information on the number of pixels in the image between the mouth and the eyes. The value of this element shall be the number of pixels between the midpoint of the line connecting the eye centres (feature points 12.1 and 12.2) and the mouth (feature point 2.3). 
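The two pixel measurements of 7.49 and 7.50 can be computed directly from the relevant feature points. An informative sketch with illustrative function names, assuming (x, y) pixel coordinates for the eye centres (12.1, 12.2) and the mouth point (2.3):

```python
import math


def inter_eye_distance_px(left_eye, right_eye):
    """Number of pixels between the two eye centres
    (feature points 12.1 and 12.2)."""
    return math.dist(left_eye, right_eye)


def eye_to_mouth_distance_px(left_eye, right_eye, mouth):
    """Number of pixels between the midpoint of the line connecting
    the eye centres and the mouth point (feature point 2.3)."""
    mid = ((left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2)
    return math.dist(mid, mouth)
```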

7.51 Image head length 

Abstract values: Integer. 
Contents: The Image head length element provides information on the number of pixels in the image from the chin to crown, or length, of the head. The head length ( L ) is defined in Figure 9. The value of this element shall be the number of pixels across the length of the head. 

7.52 Image colour space 

Abstract values: The value of this element shall be one of the following: 
  • unknown; 
  • other; 
  • 24 bit RGB; 
  • 48 bit RGB; 
  • YUV422; 
  • 8 bit greyscale; 
  • 16 bit greyscale. 
Contents: The Image colour space element indicates the colour space used in the encoded 2D or 3D image information block. RGB encoding is recommended. The ICC profile should be embedded inside the Texture map data (if applicable), as JPEG and PNG formats allow ICC profile encoding. 

7.53 Reference colour mapping block 

Abstract values: None. 
Contents: This block describes the mapping of reference colours, as in IEC 61966-8. It contains the name of the applied Reference colour schema, like IEC 61966-8, and a list of Reference colour definition and value blocks. 

7.54 Reference colour schema 

Abstract values: Octet string. 
Contents: This data element contains the name of the applied Reference colour schema, like IEC 61966-8. 

7.55 Reference colour definition and value block 

Abstract values: Two octet strings. 
Contents: These data elements contain pairs of elements consisting of a Reference colour definition like “J 14” in the IEC case, and the respective Reference colour value in the given face portrait. 

7.56 3D shape representation block 

Abstract values: None. 
Contents: The 3D shape representation block contains the 3D representation data, the 3D image information block, and the 3D capture device block. The structure of the 3D shape representation block is shown in Figure 1. 

7.57 3D representation data 

Abstract values: Octet string. 
Contents: The 3D representation data element shall contain the image data in a vertex representation. The 3D representation kind (vertex) shall be specified in the 3D representation kind element. 

7.58 3D capture device block 

Abstract values: None. 
Contents: In analogy to the 2D capture device block in the 2D image representation block, where the source of the 2D data can be coded, the 3D capture device block should be used to indicate the device that was used to acquire the 3D data. 
The 3D capture device block consists of the 3D modus element and the 3D capture device technology identifier element. 
If all elements of the 3D capture device block are absent the 3D capture device block element shall be absent. 

7.59 3D modus 

Abstract values: The value of this element shall be one of the following: 
  • unknown; 
  • active; 
  • passive. 
Contents: This element describes the manner in which the 3D image is acquired. 

7.60 3D capture device technology identifier 

Abstract values: The value of this element shall be one of the following: 
  • unknown; 
  • stereoscopic scanner; 
  • moving (monochromatic) laser line; 
  • structured light; 
  • colour coded light; 
  • ToF (time of flight); 
  • shape from shading. 
Contents: This element contains information on the technology used in the capture device used. 
NOTE Some of the listed 3D capture device technology identifier abstract values are incompatible with a 3D modus value of passive. 

7.61 3D image information block 

Abstract values: None. 
Contents: The 3D image information block consists of the 3D representation kind block, the 3D coordinate system, the 3D Cartesian scales and offsets block, the Image colour space (see 7.52), the 3D face image kind, the Image size block (see 7.44), the 3D physical face measurements block, the Post acquisition processing block (see 7.38), and the 3D texture map block. The structure of this element is shown in Figure 1. 

7.62 3D representation kind block 

Abstract values: Vertex. 
Contents: The 3D representation kind block shall contain the name of the encoding schema used for the 3D representation data, which is 3D vertex block for this version of this document. 
3D vertex block codes 3D points based on a non-regular sampling interval, typically resulting in a sparse coding. Due to variable sampling of the vertex points the vertex representation on the one hand can result in very compact representations or in a very exact representation when using many vertices. 

7.63 3D vertex block 

Abstract values: None. 
Contents: The 3D vertex block consists of one or more 3D vertex information blocks, and one or more 3D vertex triangle data blocks. 
The Coordinate system type for vertex data shall be Cartesian. All Cartesian coordinates shall be non-negative integers. After application of the Cartesian scales and offsets, the Cartesian coordinates become metric Cartesian coordinates, which may be negative, positive, or fractional. 
The origin of the metric Cartesian coordinates is defined, for example, by linking it to landmarks such as the midpoint between the two eyes (for the 3D textured image application profile) or the tip of the nose. 
The scale is defined to be in conformity with the 3D textured image resolution block. 

7.64 3D vertex information block 

Abstract values: None. 
Contents: The 3D vertex information block consists of the 3D vertex coordinates block, the 3D vertex identifier, the 3D vertex normals block, the 3D vertex textures block, and the 3D error map elements. 

7.65 3D vertex coordinate block 

Abstract values: 3D coordinate Cartesian unsigned short block, see ISO/IEC 39794-1. 
Contents: The location of each vertex is represented by its X coordinate, Y coordinate, and Z coordinate. 

7.66 3D vertex identifier 

Abstract values: Integer. 
Contents: This element shall contain a unique identifier for the associated vertex. No two vertices in a record shall have the same identifier. 
NOTE If the 3D vertex identifier is absent for a vertex, it is impossible to refer to it in the 3D vertex triangle data block. 

7.67 3D vertex normals block 

Abstract values: 3D coordinate Cartesian unsigned short block, see ISO/IEC 39794-1. 
Contents: The 3D vertex normals block contains the normal X, normal Y and normal Z coordinate elements. 

7.68 3D vertex textures block 

Abstract values: 2D coordinate Cartesian unsigned short block, see ISO/IEC 39794-1. 
Contents: The vertex texture X and vertex texture Y fields represent the corresponding x and y pixel position in the 3D texture map block, with (0, 0) denoting the upper left corner. 

7.69 3D error map 

Abstract values: Octet string. 
Contents: The 3D error map can be used to give further information on how the 3D data has been processed before it was stored in the 3D representation. The 3D error map shall be coded in the PNG format using an 8 bit per pixel greyscale image. The length of the map is variable, as it depends on the lossless compression algorithm. 
Pixel values t in the range of 0 to 199 and 206 to 255 are reserved for future use. A value of t = 200 codes that the depth value is considered to be correct. Values of t ≥ 201 code a specific potential or corrected defect of the 3D data or the corresponding texture image. See Table 6 for an enumerated list of possible values. 
Table 6 - 3D error map values 

| Description | Value |
| :--- | :--- |
| Reserved for future use | 0 to 199 |
| Depth value is considered correct | 200 |
| Depth value is interpolated, interpolation type is not specified | 201 |
| Depth value is interpolated, linear interpolation has been used | 202 |
| Depth value is interpolated, bi-cubic interpolation has been used | 203 |
| Value of optional texture image potentially wrong (texture noisy, overexposure, etc.) | 204 |
| Value of optional texture image has been corrected by post processing (image processing) | 205 |
| Reserved for future use | 206 to 255 |
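As an informative illustration of Table 6, a decoder can map each 8-bit error map pixel to its meaning as follows. Function and string names are illustrative, not part of this document.

```python
# Meanings of the defined 3D error map values (Table 6).
ERROR_MAP_MEANINGS = {
    200: "depth value is considered correct",
    201: "depth interpolated, interpolation type not specified",
    202: "depth interpolated, linear interpolation used",
    203: "depth interpolated, bi-cubic interpolation used",
    204: "texture value potentially wrong",
    205: "texture value corrected by post processing",
}


def describe_error_map_value(t):
    """Return the Table 6 meaning of an 8-bit error map pixel value."""
    if 0 <= t <= 199 or 206 <= t <= 255:
        return "reserved for future use"
    return ERROR_MAP_MEANINGS[t]
```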

7.70 3D vertex triangle data block 

Abstract values: None. 
Contents: The 3D vertex triangle data block contains a list of triangle descriptions. Each triangle is specified by the three indices (Triangle index 1, Triangle index 2, and Triangle index 3) of the vertices in the vertex data list forming the triangle. The order of the vertex indices shall be counter-clockwise to indicate the external face of the triangle. 
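A consequence of the counter-clockwise ordering is that the cross product of the triangle's edge vectors points away from the external face. An informative sketch (illustrative function name):

```python
def triangle_normal(v1, v2, v3):
    """Un-normalized face normal of a triangle whose vertices are
    listed counter-clockwise when viewed from outside the surface.
    Each vertex is an (x, y, z) tuple."""
    ux, uy, uz = (v2[i] - v1[i] for i in range(3))
    wx, wy, wz = (v3[i] - v1[i] for i in range(3))
    # Cross product u x w points out of the external face.
    return (uy * wz - uz * wy, uz * wx - ux * wz, ux * wy - uy * wx)
```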

7.71 3D coordinate system 

Abstract values: 3D Cartesian coordinate system. 
Contents: This element contains information on the coordinate system used. 
Originally, 3D data is acquired in a device dependent coordinate system. Based on the knowledge about several device parameters, the 3D data can be transformed into Cartesian coordinates. This transformation may involve rotation, translation and resampling. Efforts must be made to preserve the precision of the original data as intended by this document and defined by the 3D textured image resolution block. 
This document supports the Cartesian coordinate system for all encodings. 
The transformation to metric world coordinates is described by appropriate scaling factors and implicit rules (e.g. as used in the anthropometric landmark type). 

7.72 3D Cartesian coordinate system 

Abstract values: None. 
Contents: In the 3D Cartesian coordinate system, the point of origin must be defined in order to obtain a non-negative encoding of the X, Y and Z coordinates. 
Figure 11 shows two examples of a metric Cartesian coordinate system. On the left, a sample of a Cartesian coordinate system with the origin on the tip of the nose is shown; the XZ plane is defined parallel to the Frankfurt Horizon. On the right, a sample of a metric Cartesian coordinate system with the origin at the midpoint between the two eyes is given; the XZ plane passes through the horizontal gaze axis. This metric Cartesian coordinate system is used by the 3D textured face image application profile. The X axis leads from the right eye to the left eye, and the Z axis is in the horizontal eye direction, looking straight forward in the rest position. 

Key 

X, Y, Z coordinate axes 
FH Frankfurt Horizon 
0 coordinate origin 
Figure 11 - Samples of Cartesian coordinate systems 

7.73 3D Cartesian scales and offsets block 

Abstract values: Real. 
Contents: ScaleX, ScaleY, ScaleZ, OffsetX, OffsetY and OffsetZ are needed to transform digital coordinates to metric coordinates. The scale values have no dimension, the offset values are given in millimetres. 
The transformation from Cartesian coordinates to metric Cartesian coordinates is derived as follows: 
  • X = x · ScaleX + OffsetX; 
  • Y = y · ScaleY + OffsetY; 
  • Z = z · ScaleZ + OffsetZ. 
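The transformation above can be sketched as follows; this is an informative illustration with an illustrative function name, taking digital coordinates and the ScaleX/Y/Z and OffsetX/Y/Z values of this block.

```python
def to_metric(xyz, scales, offsets):
    """Apply the 7.73 transformation: metric coordinate =
    digital coordinate * scale + offset.

    Digital coordinates are non-negative integers; the metric result
    (in millimetres) may be negative or fractional.
    """
    x, y, z = xyz
    sx, sy, sz = scales
    ox, oy, oz = offsets
    return (x * sx + ox, y * sy + oy, z * sz + oz)
```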
There is a strong relation between anthropometric landmarks and the metric Cartesian coordinate system, as the landmarks define the origin and the orientation. 
For certain 3D face image kinds, the origin of the metric Cartesian coordinate system can be the midpoint between the left eye centre (12.1) and the right eye centre (12.2), or can be also the nose (prn). 
For certain 3D face image kinds, the orientation of the Cartesian system is linked to the pose of the head. One example is the frontal pose, which is defined by the Frankfurt Horizon as the xz plane and the vertical symmetry plane as the yz plane, with the z axis oriented in the direction of the face sight. Another example is the rest position (gaze looking straight forward), with the xz plane passing through the two eye centres and the horizontal gaze axis, and the vertical symmetry plane as the yz plane, with the z axis oriented in the direction of the face sight. 
Large values of ScaleX, ScaleY or ScaleZ indicate a low spatial sampling rate in the respective dimension. Boundary values of ScaleX, ScaleY and ScaleZ may be strongly restricted for different 3D face image kinds. 
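As an illustration, the digital-to-metric transformation above can be sketched as follows. This is a minimal sketch; the function name and tuple layout are assumptions, not part of this document.

```python
# Minimal sketch of the digital-to-metric transformation of 7.73.
# The function name and argument layout are illustrative assumptions;
# scale values are dimensionless, offsets are in millimetres.

def to_metric(x, y, z, scale, offset):
    """Map digital coordinates (x, y, z) to metric coordinates in mm."""
    sx, sy, sz = scale    # (ScaleX, ScaleY, ScaleZ)
    ox, oy, oz = offset   # (OffsetX, OffsetY, OffsetZ) in mm
    return (x * sx + ox, y * sy + oy, z * sz + oz)

# Example: a 0.5 mm sampling grid with a small metric offset
print(to_metric(100, 200, 50, (0.5, 0.5, 0.5), (1.0, 2.0, 3.0)))
# -> (51.0, 102.0, 28.0)
```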

7.74 3D face image kind 

Abstract values: None. 
Contents: The 3D face image kind element shall represent the type of the face image stored in the 3D representation data. See Table 7 for a list of allowed image types and their normative requirements. 
Table 7 - 3D Face image kind codes 
  Value                     Definition and normative requirements 
  3D Textured face images   Annex D.3 

7.75 3D physical face measurements block 

Abstract values: None. 
Contents: For specific application domains, different minimal spatial sampling rates of the interchange data may be required. For example, images with a higher spatial sampling rate allow for specific human as well as machine inspection methods that depend on the analysis of very small details. 
The 3D physical face measurements block consists of four elements. If the width of the head shall be encoded, the 3D physical head width may be used. If the length of the head shall be encoded, the 3D physical head length may be used. If the inter-eye distance shall be encoded, the 3D physical inter-eye distance may be used. If the eye-to-mouth distance shall be encoded, the 3D physical eye-to-mouth distance may be used. If necessary, all four elements may be used. All measures shall be given in millimetres. See 7.48 for equivalent definitions for pixel measurements in 2D images. 

7.76 3D physical head width 

Abstract values: Integer. 
Contents: The 3D physical head width element provides information on the width of the head in millimetres. 

7.77 3D physical inter-eye distance 

Abstract values: Integer. 
Contents: The 3D physical inter-eye distance element provides information on the distance between the eye midpoints in millimetres. 

7.78 3D physical eye-to-mouth distance 

Abstract values: Integer. 
Contents: The 3D physical eye-to-mouth distance element provides information on the distance between the mouth and the eyes in millimetres; more precisely, between the midpoint of the line connecting the eye centres (feature points 12.1 and 12.2) and the mouth (feature point 2.3). 
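A hedged sketch of this measurement, assuming 3D landmark coordinates in millimetres; the helper name and the example coordinates below are made up for illustration.

```python
import math

def eye_to_mouth_distance(left_eye, right_eye, mouth):
    """Distance from the midpoint of the two eye centres (feature points
    12.1 and 12.2) to the mouth point (feature point 2.3), in mm."""
    midpoint = tuple((a + b) / 2 for a, b in zip(left_eye, right_eye))
    return math.dist(midpoint, mouth)

# Example with made-up landmark coordinates (mm):
print(eye_to_mouth_distance((30.0, 0.0, 0.0), (-30.0, 0.0, 0.0),
                            (0.0, -65.0, 0.0)))  # -> 65.0
```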

7.79 3D physical head length 

Abstract values: Integer. 
Contents: The 3D physical head length element provides information on the length of the head, i.e. the distance from chin to crown, in millimetres. 

7.80 3D textured image resolution block 

Abstract values: None. 
Contents: The 3D textured image resolution block consists of the 3D MM shape [X/Y/Z] resolution, the 3D MM texture resolution, the 3D texture acquisition period, and the 3D face area scanned elements. 

7.81 3D MM shape [X/Y/Z] resolution 

Abstract values: Real (Decimal). 
Contents: The 3D MM shape X resolution, the 3D MM shape Y resolution, and the 3D MM shape Z resolution define the minimal distance acquired by the shape acquisition system in millimetres. These resolutions may differ from the 3D MM texture resolution value. 

7.82 3D MM texture resolution 

Abstract values: Real (Decimal). 
Contents: The 3D MM texture resolution defines the minimal distance acquired by the texture acquisition system in millimetres. This resolution may differ from the 3D MM shape [X/Y/Z] resolution values. 

7.83 3D texture acquisition period 

Abstract values: Real (Decimal). 
Contents: The 3D texture acquisition period defines the time in milliseconds used for shape and texture acquisition. During this period neither the acquisition system nor the subject shall move or be moved. 

7.84 3D face area scanned block 

Abstract values: The value of this element shall be one or more of the following: 
  • Front of the head; 
  • Chin; 
  • Ears; 
  • Neck; 
  • Back of the head; 
  • Full head. 
Contents: The 3D face area scanned shall indicate the scanned area of the face. The minimum allowed 3D face area scanned is Front of the head. 

7.85 3D texture map block 

Abstract values: None. 
Contents: The 3D texture map block consists of the 3D texture map data, the Image data format, the 3D texture capture device spectral block, the 3D texture standard illuminant, and the 3D error map (See 7.70) elements. 
The 3D texture map block should only be used to store face texture data that is acquired by a scanning device during the 3D acquisition process, and therefore may have a different geometry than the 2D representation data stored in the same BDB. It is not a substitute for the 2D representation data. The 3D texture map shall be coded in 8 bit or 16 bit greyscale or as a 24 bit colour image. The length of the map is variable as it depends on the applied compression algorithm. 

7.86 3D texture capture device spectral block 

Abstract values: The value of this element shall be one of the following: 
  • unknown; 
  • other; 
  • white (380 nm to 780 nm); 
  • very near infrared (photographic) (780 nm to 1 000 nm); 
  • short wave infrared (1 000 nm to 1 400 nm). 
Contents: The 3D texture capture device spectral block denotes the kind of spectrum that has been used for acquiring the 3D texture map. This spectrum may differ from the one used for 2D image representation data. 

ISO/IEC 39794-5:2019(E)

7.87 3D texture standard illuminant 

Abstract values: The value of this element shall be one of the following: 
  • D30; 
  • D35; 
  • D40; 
  • D45; 
  • D50; 
  • D55; 
  • D60; 
  • D65; 
  • D70; 
  • D75; 
  • D80. 
Contents: Illumination according to one of the standard illuminants defined in ISO 11664-2 or similar. 

7.88 3D texture map data 

Abstract values: Octet string. 
Contents: The 3D texture map data shall contain the face texture data acquired by a capture device during the 3D acquisition process. The 3D texture map data element shall have the format specified in the Image data format element. 

8 Encoding 

8.1 Overview 

The tagged binary encoding and the XML encoding are given in this clause and Annex A, respectively. In order to aid recognition of abstract values, the same lower camel-case notation is used for abstract data elements in the ASN.1 module and in the XSD. The lower camel-case names are derived from the abstract values given here. 
The names of the ASN.1 module and of the XML schema definition (available at http://standards.iso.org/iso-iec/39794/-5/ed-1/en) are iso-iec-39794-5-ed-1-v1.asn and iso-iec-39794-5-ed-1-v1.xsd, respectively. 
Content and semantics of parameters of ISO/IEC 19794-5 (2011 edition) served as the starting point for this document. The syntax has been modified to accommodate new requirements, and many parameters have been added, allowing the encoding of many more properties of face images than before. 
Most of the face image data record parameters are considered optional to allow application specific profiles and efficient storage of the available data. 
The 3D encoding types 3D point map and range image are not supported by this version of this document. 

8.2 Tagged binary encoding 

This clause specifies the ASN.1 module implementing the abstract data elements specified in Clause 7. It describes the parameters of face image data as they are encoded in ASN.1. These ASN.1 definitions are based on the following design decisions: 
  • The ASN.1 types as defined in Clause A.1 which encode the abstract data elements of Clause 7 shall conform to the ASN.1 standard (ISO/IEC 8824-1) and to ISO/IEC 39794-1. 
  • The tagged binary encoding of face image data shall be obtained by applying the ASN.1 distinguished encoding rules (DER) defined in ISO/IEC 8825-1 to a value of the type FaceImageDataBlock defined in the given ASN.1 module. The DER encoding of each data object has three parts: tag octets that identify the data object, length octets that give the number of subsequent value octets, and the value octets. 
  • The ISO/IEC 39794 ASN.1 modules are defined independently, i.e. no definitions from outside the ISO/IEC 39794 series are re-used or imported, in order to avoid interdependencies with other standardization bodies, even if this might be useful (e.g. considering X.509/PKIX definitions). 
  • Any face image data specific definition is fully included in the ASN.1 module in this document; any re-usable header field that is defined in the ISO/IEC 39794-1 framework is part of the separate ISO/IEC 39794-1 ASN.1 module. 
  • The entry point for any ISO/IEC 39794 series biometric type definition is the BiometricDataBlock defined in the ISO/IEC 39794-1 ASN.1 module. This module includes the ASN.1 definition of all modality specific parts of the ISO/IEC 39794 series. This allows modifying or extending both the generic header information and the supported set of biometric data types in one place without impacting the other parts of the ISO/IEC 39794 series. For example, the ISO/IEC 39794-1 ASN.1 module includes the definitions of face image data and fingerprint image data and is extended later on by iris data. In this case, the ASN.1 definitions of ISO/IEC 39794-4 and this document do not need to be modified. 
  • Extension markers are included in all data elements to ensure extensibility and forward/backward compatibility when new parameters need to be added to existing containers/blocks. 
  • The latest version of the ASN.1 standard is employed, namely ISO/IEC 8824-1:2015. 
  • The distinguished encoding rules (DER) as specified in ISO/IEC 8825-1 are utilized to represent the data in binary format. Other options such as XML encoding rules shall not be used. The syntax of face image XML documents shall be based on the XML schema definition in A.2, not on the ASN.1 module in A.1. 
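The DER tag-length-value layout described above can be sketched as follows. This is a minimal illustration of the generic TLV structure (single-octet tags, definite-length octets) only, not an implementation of the ASN.1 module; the function name is made up.

```python
def der_encode_tlv(tag: int, value: bytes) -> bytes:
    """Encode a single-octet tag with DER definite-length octets."""
    n = len(value)
    if n < 0x80:
        length = bytes([n])            # short form: one length octet
    else:
        body = n.to_bytes((n.bit_length() + 7) // 8, "big")
        length = bytes([0x80 | len(body)]) + body   # long form
    return bytes([tag]) + length + value

# Example: a context-specific [0] primitive element with two value octets
print(der_encode_tlv(0x80, b"\x01\x02").hex())  # -> "80020102"
```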
The ASN.1 module in A.1 is available at http://standards.iso.org/iso-iec/39794/-5/ed-1/en. 
Additional explanations on the mapping between the specifications in Clause 7 and the ASN.1 module given in A.1 apply: 
  • The ASN.1 schema does not guarantee that if all elements that could be contained in an element are absent, the whole element is absent too. 
  • If in the propertiesBlock element a property is set to TRUE, the respective property is present in the image; if it is set to FALSE, that property is absent in the image. If a property is omitted, no statement has been made. 
  • If in the expressionBlock element one of the components is set to TRUE, the respective attribute is present in the image; if it is set to FALSE, it is absent in the image. If an element is omitted, no statement has been made. The ASN.1 schema does not prevent choosing the expressions neutral and smile for the same face image. However, neutral and smile shall not both be true for the same image. 
  • At least one of the elements of the poseAngleBlock element shall be present; otherwise the whole poseAngleBlock element shall be absent. This requirement is not covered by the ASN.1 schema. 

  • MPEG4 feature points with the abstract name <1>.<2> are encoded as mpeg4PointCode-<01>-<02>. AnthropometricLandmarkPointCode elements with the abstract name <1>.<2> are encoded as pointCode-<01>-<02>. 
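The zero-padding rule above can be illustrated with a small helper (the function name is hypothetical, not from this document):

```python
def mpeg4_identifier(abstract_name: str) -> str:
    """Map an abstract MPEG4 feature point name "<1>.<2>" to the
    encoded identifier mpeg4PointCode-<01>-<02> (two-digit components)."""
    group, point = abstract_name.split(".")
    return f"mpeg4PointCode-{int(group):02d}-{int(point):02d}"

print(mpeg4_identifier("2.1"))   # -> mpeg4PointCode-02-01
print(mpeg4_identifier("12.4"))  # -> mpeg4PointCode-12-04
```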
Encoding examples are contained in Annex B. 

8.3 XML encoding 

Annex A.2 specifies an XSD schema, in which the abstract data elements of Clause 7 are constrained by XML types defined within one of the following standards: the W3C Recommendations XML Schema Parts 1 and 2, ISO/IEC 39794-1, or this document. 
Binary data shall only be encoded as Base64 and stored as a text string in an element which itself has the underlying type of xs:base64Binary, for example: <xs:element name="data" type="xs:base64Binary"/> 
For the avoidance of doubt, other methods of encoding binary data, such as xs:hexBinary or proprietary extensions which support binary data encoding (e.g. XOP), are not permitted. 
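For illustration, the Base64 text content of such an element can be produced as follows; the element name "data" follows the example above, and the byte content is made up.

```python
import base64

image_bytes = b"\xff\xd8\xff\xe0"   # e.g. the first octets of a JPEG stream
text = base64.b64encode(image_bytes).decode("ascii")
print(f"<data>{text}</data>")       # -> <data>/9j/4A==</data>
```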
Additional explanations on the mapping between the specifications in Clause 7 and the XSD given in A.2 apply: 
  • The XML schema does not guarantee that if all elements that could be contained in an element are absent, the whole element is absent, too. 
  • If a property in a propertiesBlock element is set to TRUE, this property is present in the image; if it is set to FALSE, the property is absent in the image. If a property is omitted, no statement has been made. 
  • If an expression in an expressionBlock element is set to TRUE, this expression is present in the image; if it is set to FALSE, the expression is absent in the image. If an expression is omitted, no statement has been made. 
  • The XML schema does not prevent choosing the expressions neutral and smile for the same face image. However, neutral and smile shall not both be true for the same image. 
  • At least one of the elements of the poseAngleBlock element shall be present; otherwise the whole poseAngleBlock element shall be absent. This requirement is not covered by the XML schema. 
  • MPEG4 feature points with the abstract name <1>.<2> are encoded as MPEG4PointCode-<01>-<02>. 
  • AnthropometricLandmarkPointCode elements with the abstract name <1>.<2> are encoded as PointCode-<01>-<02>. 
The XSD module in A.2 can be retrieved from http://standards.iso.org/iso-iec/39794/-5/ed-1/en. 
Encoding examples are contained in Annex B. 

9 Registered BDB format identifiers 

The registrations listed in Table 8 have been made in accordance with ISO/IEC 19785 (all parts)[31] to identify the face image data interchange formats defined in this document. The format owner is ISO/IEC JTC 1/SC 37 with the registered biometric organization identifier 257 (0101Hex). 
Table 8 - BDB format identifiers 
  BDB format identifier   Short name             Full object identifier 
  42 (002AHex)            g3-binary-face-image   { iso(1) registration-authority(1) cbeff(19785) biometric-organization(0) jtc1-sc37(257) bdbs(0) g3-binary-face-image(42) } 
  43 (002BHex)            g3-xml-face-image      { iso(1) registration-authority(1) cbeff(19785) biometric-organization(0) jtc1-sc37(257) bdbs(0) g3-xml-face-image(43) } 

Annex A (normative) 

Formal specifications 

A.1 ASN.1 module for tagged binary encoding 

This ASN.1 module is available at http://standards.iso.org/iso-iec/39794/-5/ed-1/en. 
ISO-IEC-39794-5-ed-1-v1 {iso(1) standard(0) iso-iec-39794(39794) part-5(5) ed-1(1) v1(1)
iso-iec-39794-5(0) }
-- Use of ISO/IEC copyright in this Schema is licensed for the purpose of
-- developing, implementing, and using software based on this Schema, subject
-- to the following conditions:
--
-- * Software developed from this Schema must retain the Copyright Notice,
-- this list of conditions and the disclaimer below ("Disclaimer").
--
-- * Neither the name or logo of ISO or of IEC, nor the names of specific
-- contributors, may be used to endorse or promote software derived from
-- this Schema without specific prior written permission.
--
-- * The software developer shall attribute the Schema to ISO/IEC and
-- identify the ISO/IEC standard from which it is taken. Such attribution
-- (e.g., "This software makes use of the Schema from ISO/IEC 39794-5
-- within modifications permitted in the relevant ISO/IEC standard.
-- Please reproduce this note if possible."), may be placed in the
-- software itself or any other reasonable location.
-- The Disclaimer is:
-- THE SCHEMA ON WHICH THIS SOFTWARE IS BASED IS PROVIDED BY THE COPYRIGHT
-- HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
-- INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
-- AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
-- THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-- INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
-- NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-- DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-- THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-- (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
-- THE CODE COMPONENTS, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
DEFINITIONS IMPLICIT TAGS ::= BEGIN
    IMPORTS
        VersionBlock,
        CaptureDateTimeBlock,
        QualityBlocks,
        PADDataBlock,
        CoordinateCartesian2DUnsignedShortBlock,
        CoordinateCartesian3DUnsignedShortBlock,
        RegistryIdBlock,
        CertificationIdBlocks
    FROM ISO-IEC-39794-1-ed-1-v1;
    FaceImageDataBlock ::= [APPLICATION 5] SEQUENCE {
        versionBlock [0] VersionBlock,
        representationBlocks [1] RepresentationBlocks,
        ...
    }
RepresentationBlocks ::= SEQUENCE OF RepresentationBlock
RepresentationBlock ::= SEQUENCE {
    representationId [0] INTEGER (0..MAX),
    imageRepresentation [1] ImageRepresentation,
    captureDateTimeBlock [2] CaptureDateTimeBlock OPTIONAL,
    qualityBlocks [3] QualityBlocks OPTIONAL,
    padDataBlock [4] PADDataBlock OPTIONAL,
    sessionId [5] INTEGER (0..MAX) OPTIONAL,
    derivedFrom [6] INTEGER (0..MAX) OPTIONAL,
    captureDeviceBlock [7] CaptureDeviceBlock OPTIONAL,
    identityMetadataBlock [8] IdentityMetadataBlock OPTIONAL,
    landmarkBlocks [9] LandmarkBlocks OPTIONAL,
    ...
}
CaptureDeviceBlock ::= SEQUENCE {
    modelIdBlock [0] RegistryIdBlock OPTIONAL,
    certificationIdBlocks [1] CertificationIdBlocks OPTIONAL,
    ...
}
IdentityMetadataBlock ::= SEQUENCE {
    gender [0] Gender OPTIONAL,
    eyeColour [1] EyeColour OPTIONAL,
    hairColour [2] HairColour OPTIONAL,
    subjectHeight [3] SubjectHeight OPTIONAL,
    propertiesBlock [4] PropertiesBlock OPTIONAL,
    expressionBlock [5] ExpressionBlock OPTIONAL,
    poseAngleBlock [6] PoseAngleBlock OPTIONAL,
    ...
}
GenderCode ::= ENUMERATED {
    unknown (0),
    other (1),
    male (2),
    female (3)
}
GenderExtensionBlock ::= SEQUENCE {
    fallback [0] GenderCode,
    ...
}
Gender ::= CHOICE {
    code [0] GenderCode,
    extensionBlock [1] GenderExtensionBlock
}
EyeColourCode ::= ENUMERATED {
    unknown (0),
    other (1),
    black (2),
    blue (3),
    brown (4),
    grey (5),
    green (6),
    hazel (7),
    multi-coloured (8),
    pink (9)
}
EyeColourExtensionBlock ::= SEQUENCE {
    fallback [0] EyeColourCode,
    ...
}
EyeColour ::= CHOICE {
    code [0] EyeColourCode,
    extensionBlock [1] EyeColourExtensionBlock

}
HairColourCode ::= ENUMERATED {
    unknown (0),
    other (1),
    bald (2),
    black (3),
    blonde (4),
    brown (5),
    grey (6),
    white (7),
    red (8),
    knownColoured (9)
}
HairColourExtensionBlock ::= SEQUENCE {
    fallback [0] HairColourCode,
    ...
}
HairColour ::= CHOICE {
    code [0] HairColourCode,
    extensionBlock [1] HairColourExtensionBlock
}
SubjectHeight ::= INTEGER (1..65535)
PropertiesBlock ::= SEQUENCE {
    glasses [0] BOOLEAN OPTIONAL,
    moustache [1] BOOLEAN OPTIONAL,
    beard [2] BOOLEAN OPTIONAL,
    teethVisible [3] BOOLEAN OPTIONAL,
    pupilOrIrisNotVisible [4] BOOLEAN OPTIONAL,
    mouthOpen [5] BOOLEAN OPTIONAL,
    leftEyePatch [6] BOOLEAN OPTIONAL,
    rightEyePatch [7] BOOLEAN OPTIONAL,
    darkGlasses [8] BOOLEAN OPTIONAL,
    biometricAbsent [9] BOOLEAN OPTIONAL,
    headCoveringsPresent [10] BOOLEAN OPTIONAL,
    ...
}
ExpressionBlock ::= SEQUENCE {
    neutral [0] BOOLEAN OPTIONAL,
    smile [1] BOOLEAN OPTIONAL,
    raisedEyebrows [2] BOOLEAN OPTIONAL,
    eyesLookingAwayFromTheCamera [3] BOOLEAN OPTIONAL,
    squinting [4] BOOLEAN OPTIONAL,
    frowning [5] BOOLEAN OPTIONAL,
    ...
}
PoseAngleBlock ::= SEQUENCE {
    yawAngleBlock [0] AngleDataBlock OPTIONAL,
    pitchAngleBlock [1] AngleDataBlock OPTIONAL,
    rollAngleBlock [2] AngleDataBlock OPTIONAL
}
AngleDataBlock ::= SEQUENCE {
    angleValue [0] AngleValue,
    angleUncertainty [1] AngleUncertainty OPTIONAL,
    ...
}
AngleValue ::= INTEGER (-180..180)
AngleUncertainty ::= INTEGER (0..180)
LandmarkBlocks ::= SEQUENCE OF LandmarkBlock
LandmarkBlock ::= SEQUENCE {
    landmarkKind [0] LandmarkKind,
    landmarkCoordinates [1] LandmarkCoordinates OPTIONAL,
    ...
}
LandmarkKind ::= CHOICE {
    base [0] LandmarkKindBase,
    extensionBlock [1] LandmarkKindExtensionBlock
}
LandmarkKindBase ::= CHOICE {
    mpeg4FeaturePoint [0] MPEG4FeaturePoint,
    anthropometricLandmark [1] AnthropometricLandmark
}
LandmarkKindExtensionBlock ::= SEQUENCE {
}
MPEG4FeaturePointCode ::= ENUMERATED {
    mpeg4PointCode-02-01 (0),
    mpeg4PointCode-02-02 (1),
    mpeg4PointCode-02-03 (2),
    mpeg4PointCode-02-04 (3),
    mpeg4PointCode-02-05 (4),
    mpeg4PointCode-02-06 (5),
    mpeg4PointCode-02-07 (6),
    mpeg4PointCode-02-08 (7),
    mpeg4PointCode-02-09 (8),
    mpeg4PointCode-02-10 (9),
    mpeg4PointCode-02-11 (10),
    mpeg4PointCode-02-12 (11),
    mpeg4PointCode-02-13 (12),
    mpeg4PointCode-02-14 (13),
    mpeg4PointCode-03-01 (14),
    mpeg4PointCode-03-02 (15),
    mpeg4PointCode-03-03 (16),
    mpeg4PointCode-03-04 (17),
    mpeg4PointCode-03-05 (18),
    mpeg4PointCode-03-06 (19),
    mpeg4PointCode-03-07 (20),
    mpeg4PointCode-03-08 (21),
    mpeg4PointCode-03-09 (22),
    mpeg4PointCode-03-10 (23),
    mpeg4PointCode-03-11 (24),
    mpeg4PointCode-03-12 (25),
    mpeg4PointCode-03-13 (26),
    mpeg4PointCode-03-14 (27),
    mpeg4PointCode-04-01 (28),
    mpeg4PointCode-04-02 (29),
    mpeg4PointCode-04-03 (30),
    mpeg4PointCode-04-04 (31),
    mpeg4PointCode-04-05 (32),
    mpeg4PointCode-04-06 (33),
    mpeg4PointCode-05-01 (34),
    mpeg4PointCode-05-02 (35),
    mpeg4PointCode-05-03 (36),
    mpeg4PointCode-05-04 (37),
    mpeg4PointCode-06-01 (38),
    mpeg4PointCode-06-02 (39),
    mpeg4PointCode-06-03 (40),
    mpeg4PointCode-06-04 (41),
    mpeg4PointCode-07-01 (42),
    mpeg4PointCode-08-01 (43),
    mpeg4PointCode-08-02 (44),
    mpeg4PointCode-08-03 (45),
    mpeg4PointCode-08-04 (46),
    mpeg4PointCode-08-05 (47),
    mpeg4PointCode-08-06 (48),
    mpeg4PointCode-08-07 (49),
    mpeg4PointCode-08-08 (50),

    mpeg4PointCode-08-09 (51),
    mpeg4PointCode-08-10 (52),
    mpeg4PointCode-09-01 (53),
    mpeg4PointCode-09-02 (54),
    mpeg4PointCode-09-03 (55),
    mpeg4PointCode-09-04 (56),
    mpeg4PointCode-09-05 (57),
    mpeg4PointCode-09-06 (58),
    mpeg4PointCode-09-07 (59),
    mpeg4PointCode-09-08 (60),
    mpeg4PointCode-09-09 (61),
    mpeg4PointCode-09-10 (62),
    mpeg4PointCode-09-11 (63),
    mpeg4PointCode-09-12 (64),
    mpeg4PointCode-09-13 (65),
    mpeg4PointCode-09-14 (66),
    mpeg4PointCode-09-15 (67),
    mpeg4PointCode-10-01 (68),
    mpeg4PointCode-10-02 (69),
    mpeg4PointCode-10-03 (70),
    mpeg4PointCode-10-04 (71),
    mpeg4PointCode-10-05 (72),
    mpeg4PointCode-10-06 (73),
    mpeg4PointCode-10-07 (74),
    mpeg4PointCode-10-08 (75),
    mpeg4PointCode-10-09 (76),
    mpeg4PointCode-10-10 (77),
    mpeg4PointCode-11-01 (78),
    mpeg4PointCode-11-02 (79),
    mpeg4PointCode-11-03 (80),
    mpeg4PointCode-11-04 (81),
    mpeg4PointCode-11-05 (82),
    mpeg4PointCode-11-06 (83),
    mpeg4PointCode-12-01 (84),
    mpeg4PointCode-12-02 (85),
    mpeg4PointCode-12-03 (86),
    mpeg4PointCode-12-04 (87)
}
MPEG4FeaturePointExtensionBlock ::= SEQUENCE {
    fallback [0] MPEG4FeaturePointCode,
    ...
}
MPEG4FeaturePoint ::= CHOICE {
    code [0] MPEG4FeaturePointCode,
    extensionBlock [1] MPEG4FeaturePointExtensionBlock
}
AnthropometricLandmark ::= CHOICE {
    base [0] AnthropometricLandmarkBase,
    extensionBlock [1] AnthropometricLandmarkExtensionBlock
}
AnthropometricLandmarkBase ::= CHOICE {
    anthropometricLandmarkName [0] AnthropometricLandmarkName,
    anthropometricLandmarkPointName [1] AnthropometricLandmarkPointName,
    anthropometricLandmarkPointId [2] AnthropometricLandmarkPointId
}
AnthropometricLandmarkExtensionBlock ::= SEQUENCE {
}
AnthropometricLandmarkNameCode ::= ENUMERATED {
    vertex (0),
    glabella (1),
    opisthocranion (2),
    eurionLeft (3),
    eurionRight (4),
    frontotemporaleLeft (5),
    frontotemporaleRight (6),
    trichion (7),
    zygionLeft (8),
    zygionRight (9),
    gonionLeft (10),
    gonionRight (11),
    sublabiale (12),
    pogonion (13),
    menton (14),
    condylionLateraleLeft (15),
    condylionLateraleRight (16),
    endocanthionLeft (17),
    endocanthionRight (18),
    exocanthionLeft (19),
    exocanthionRight (20),
    centerPointOfPupilLeft (21),
    centerPointOfPupilRight (22),
    orbitaleLeft (23),
    orbitaleRight (24),
    palpebraleSuperiusLeft (25),
    palpebraleSuperiusRight (26),
    palpebraleInferiusLeft (27),
    palpebraleInferiusRight (28),
    orbitaleSuperiusLeft (29),
    orbitaleSuperiusRight (30),
    superciliareLeft (31),
    superciliareRight (32),
    nasion (33),
    sellion (34),
    alareLeft (35),
    alareRight (36),
    pronasale (37),
    subnasale (38),
    subalare (39),
    alarCurvatureLeft (40),
    alarCurvatureRight (41),
    maxillofrontale (42),
    christaPhiltraLandmarkLeft (43),
    christaPhiltraLandmarkRight (44),
    labialeSuperius (45),
    labialeInferius (46),
    cheilionLeft (47),
    cheilionRight (48),
    stomion (49),
    superauraleLeft (50),
    superauraleRight (51),
    subauraleLeft (52),
    subauraleRight (53),
    preaurale (54),
    postaurale (55),
    otobasionSuperiusLeft (56),
    otobasionSuperiusRight (57),
    otobasionInferius (58),
    porion (59),
    tragion (60)
}
AnthropometricLandmarkNameExtensionBlock ::= SEQUENCE {
    fallback [0] AnthropometricLandmarkNameCode,
    ...
}
AnthropometricLandmarkName ::= CHOICE {
    code [0] AnthropometricLandmarkNameCode,
    extensionBlock [1] AnthropometricLandmarkNameExtensionBlock
}
AnthropometricLandmarkPointNameCode ::= ENUMERATED {
    pointCode-01-01 (0),
    pointCode-01-02 (1),
    pointCode-01-05 (2),

    pointCode-01-06 (3),
    pointCode-01-07 (4),
    pointCode-01-08 (5),
    pointCode-01-09 (6),
    pointCode-02-01 (7),
    pointCode-02-02 (8),
    pointCode-02-03 (9),
    pointCode-02-04 (10),
    pointCode-02-05 (11),
    pointCode-02-06 (12),
    pointCode-02-07 (13),
    pointCode-02-09 (14),
    pointCode-02-10 (15),
    pointCode-03-01 (16),
    pointCode-03-02 (17),
    pointCode-03-03 (18),
    pointCode-03-04 (19),
    pointCode-03-05 (20),
    pointCode-03-06 (21),
    pointCode-03-07 (22),
    pointCode-03-08 (23),
    pointCode-03-09 (24),
    pointCode-03-10 (25),
    pointCode-03-11 (26),
    pointCode-03-12 (27),
    pointCode-04-01 (28),
    pointCode-04-02 (29),
    pointCode-04-03 (30),
    pointCode-04-04 (31),
    pointCode-05-01 (32),
    pointCode-05-02 (33),
    pointCode-05-03 (34),
    pointCode-05-04 (35),
    pointCode-05-06 (36)
}
AnthropometricLandmarkPointNameExtensionBlock ::= SEQUENCE {
    fallback [0] AnthropometricLandmarkPointNameCode,
    ...
}
AnthropometricLandmarkPointName ::= CHOICE {
    code [0] AnthropometricLandmarkPointNameCode,
    extensionBlock [1] AnthropometricLandmarkPointNameExtensionBlock
}
AnthropometricLandmarkPointIdCode ::= ENUMERATED {
    v (0),
    g (1),
    op (2),
    eu-left (3),
    eu-right (4),
    ft-left (5),
    ft-right (6),
    tr (7),
    zy-left (8),
    zy-right (9),
    go-left (10),
    go-right (11),
    sl (12),
    pg (13),
    gn (14),
    cdl-left (15),
    cdl-right (16),
    en-left (17),
    en-right (18),
    ex-left (19),
    ex-right (20),
    p-left (21),
    p-right (22),
    or-left (23),
    or-right (24),
    ps-left (25),
    ps-right (26),
    pi-left (27),
    pi-right (28),
    os-left (29),
    os-right (30),
    sci-left (31),
    sci-right (32),
    n (33),
    se (34),
    al-left (35),
    al-right (36),
    prn (37),
    sn (38),
    sbal (39),
    ac-left (40),
    ac-right (41),
    mf-left (42),
    mf-right (43),
    cph-left (44),
    cph-right (45),
    ls (46),
    li (47),
    ch-left (48),
    ch-right (49),
    sto (50),
    sa-left (51),
    sa-right (52),
    sba-left (53),
    sba-right (54),
    pra-left (55),
    pra-right (56),
    pa (57),
    obs-left (58),
    obs-right (59),
    obi (60),
    po (61),
    t (62)
}
AnthropometricLandmarkPointIdExtensionBlock ::= SEQUENCE {
    fallback [0] AnthropometricLandmarkPointIdCode,
    ...
}
AnthropometricLandmarkPointId ::= CHOICE {
    code [0] AnthropometricLandmarkPointIdCode,
    extensionBlock [1] AnthropometricLandmarkPointIdExtensionBlock
}
LandmarkCoordinates ::= CHOICE {
    base [0] LandmarkCoordinatesBase,
    extensionBlock [1] LandmarkCoordinatesExtensionBlock
}
LandmarkCoordinatesBase ::= CHOICE {
    coordinateCartesian2DBlock [0] CoordinateCartesian2DUnsignedShortBlock,
    coordinateTextureImageBlock [1] CoordinateTextureImageBlock,
    coordinateCartesian3DBlock [2] CoordinateCartesian3DUnsignedShortBlock
}
LandmarkCoordinatesExtensionBlock ::= SEQUENCE {
}
CoordinateTextureImageBlock ::= SEQUENCE {
    uInPixel [0] INTEGER (0..MAX),
    vInPixel [1] INTEGER (0..MAX)
}

ImageRepresentation ::= CHOICE {
    base [0] ImageRepresentationBase,
    extensionBlock [1] ImageRepresentationExtensionBlock
}
ImageRepresentationBase ::= CHOICE {
    imageRepresentation2DBlock [0] ImageRepresentation2DBlock,
    shapeRepresentation3DBlock [1] ShapeRepresentation3DBlock
}
ImageRepresentationExtensionBlock ::= SEQUENCE {
}
ImageRepresentation2DBlock ::= SEQUENCE {
    representationData2D [0] OCTET STRING,
    imageInformation2DBlock [1] ImageInformation2DBlock,
    captureDevice2DBlock [2] CaptureDevice2DBlock OPTIONAL,
    ...
}
CaptureDevice2DBlock ::= SEQUENCE {
    captureDeviceSpectral2DBlock [0] CaptureDeviceSpectral2DBlock OPTIONAL,
    captureDeviceTechnologyId2D [1] CaptureDeviceTechnologyId2D OPTIONAL,
    ...
}
CaptureDeviceSpectral2DBlock ::= SEQUENCE {
    whiteLight [0] BOOLEAN OPTIONAL,
    nearInfrared [1] BOOLEAN OPTIONAL,
    thermal [2] BOOLEAN OPTIONAL,
    ...
}
CaptureDeviceTechnologyId2DCode ::= ENUMERATED {
    unknown (0),
    staticPhotographFromUnknownSource (1),
    staticPhotographFromDigitalStillImageCamera (2),
    staticPhotographFromScanner (3),
    videoFrameFromUnknownSource (4),
    videoFrameFromAnalogueVideoCamera (5),
    videoFrameFromDigitalVideoCamera (6)
}
CaptureDeviceTechnologyId2DExtensionBlock ::= SEQUENCE {
    fallback [0] CaptureDeviceTechnologyId2DCode,
    ...
}
CaptureDeviceTechnologyId2D ::= CHOICE {
    code [0] CaptureDeviceTechnologyId2DCode,
    extensionBlock [1] CaptureDeviceTechnologyId2DExtensionBlock
}
ImageInformation2DBlock ::= SEQUENCE {
    imageDataFormat [0] ImageDataFormat,
    faceImageKind2D [1] FaceImageKind2D OPTIONAL,
    postAcquisitionProcessingBlock [2] PostAcquisitionProcessingBlock OPTIONAL,
    lossyTransformationAttempts [3] LossyTransformationAttempts OPTIONAL,
    cameraToSubjectDistance [4] CameraToSubjectDistance OPTIONAL,
    sensorDiagonal [5] SensorDiagonal OPTIONAL,
    lensFocalLength [6] LensFocalLength OPTIONAL,
    imageSizeBlock [7] ImageSizeBlock OPTIONAL,
    imageFaceMeasurementsBlock [8] ImageFaceMeasurementsBlock OPTIONAL,
    imageColourSpace [9] ImageColourSpace OPTIONAL,
    referenceColourMappingBlock [10] ReferenceColourMappingBlock OPTIONAL,
    ...
}
FaceImageKind2DCode ::= ENUMERATED {
    mrtd (0),
    generalPurpose (1)
}
FaceImageKind2DExtensionBlock ::= SEQUENCE {
    fallback [0] FaceImageKind2DCode,
    ...
}
FaceImageKind2D ::= CHOICE {
    code [0] FaceImageKind2DCode,
    extensionBlock [1] FaceImageKind2DExtensionBlock
}
PostAcquisitionProcessingBlock ::= SEQUENCE {
    rotated [0] BOOLEAN OPTIONAL,
    cropped [1] BOOLEAN OPTIONAL,
    downSampled [2] BOOLEAN OPTIONAL,
    whiteBalanceAdjusted [3] BOOLEAN OPTIONAL,
    multiplyCompressed [4] BOOLEAN OPTIONAL,
    interpolated [5] BOOLEAN OPTIONAL,
    contrastStretched [6] BOOLEAN OPTIONAL,
    poseCorrected [7] BOOLEAN OPTIONAL,
    multiViewImage [8] BOOLEAN OPTIONAL,
    ageProgressed [9] BOOLEAN OPTIONAL,
    superResolutionProcessed [10] BOOLEAN OPTIONAL,
    normalised [11] BOOLEAN OPTIONAL,
    ...
}
LossyTransformationAttemptsCode ::= ENUMERATED {
    unknown (0),
    zero (1),
    one (2),
    moreThanOne (3)
}
LossyTransformationAttemptsExtensionBlock ::= SEQUENCE {
    fallback [0] LossyTransformationAttemptsCode,
    ...
}
LossyTransformationAttempts ::= CHOICE {
    code [0] LossyTransformationAttemptsCode,
    extensionBlock [1] LossyTransformationAttemptsExtensionBlock
}
ImageDataFormatCode ::= ENUMERATED {
    unknown (0),
    other (1),
    jpeg (2),
    jpeg2000Lossy (3),
    jpeg2000Lossless (4),
    png (5),
    pgm (6),
    ppm (7)
}
ImageDataFormatExtensionBlock ::= SEQUENCE {
    ...
}
ImageDataFormat ::= CHOICE {
    code [0] ImageDataFormatCode,
    extensionBlock [1] ImageDataFormatExtensionBlock
}
CameraToSubjectDistance ::= INTEGER (0..50000)
SensorDiagonal ::= INTEGER (0..2000)
LensFocalLength ::= INTEGER (0..2000)

    ImageSizeBlock ::= SEQUENCE {
        width [0] ImageSize,
        height [1] ImageSize
    }
    ImageSize ::= INTEGER (0..65535)
    ImageFaceMeasurementsBlock ::= SEQUENCE {
        imageHeadWidth [0] INTEGER (0..MAX) OPTIONAL,
        imageInterEyeDistance [1] INTEGER (0..MAX) OPTIONAL,
        imageEyeToMouthDistance [2] INTEGER (0..MAX) OPTIONAL,
        imageHeadLength [3] INTEGER (0..MAX) OPTIONAL,
        ...
    }
    ImageColourSpaceCode ::= ENUMERATED {
        unknown (0),
        other (1),
        rgb24Bit (2),
        rgb48Bit (3),
        yuv422 (4),
        greyscale8Bit (5),
        greyscale16Bit (6)
    }
    ImageColourSpaceExtensionBlock ::= SEQUENCE {
        fallback [0] ImageColourSpaceCode,
        ...
    }
    ImageColourSpace ::= CHOICE {
        code [0] ImageColourSpaceCode,
        extensionBlock [1] ImageColourSpaceExtensionBlock
    }
    ReferenceColourMappingBlock ::= SEQUENCE {
        referenceColourSchema [0] OCTET STRING OPTIONAL,
        referenceColourDefinitionAndValueBlocks [1] ReferenceColourDefinitionAndValueBlocks OPTIONAL,
        ...
    }
    ReferenceColourDefinitionAndValueBlocks ::= SEQUENCE OF ReferenceColourDefinitionAndValueBlock
    ReferenceColourDefinitionAndValueBlock ::= SEQUENCE {
        referenceColourDefinition [0] OCTET STRING OPTIONAL,
        referenceColourValue [1] OCTET STRING OPTIONAL,
        ...
    }
    ShapeRepresentation3DBlock ::= SEQUENCE {
        representationData3D [0] OCTET STRING,
        imageInformation3DBlock [1] ImageInformation3DBlock,
        captureDevice3DBlock [2] CaptureDevice3DBlock OPTIONAL,
        ...
    }
    CaptureDevice3DBlock ::= SEQUENCE {
        modus3D [0] Modus3D OPTIONAL,
        captureDeviceTechnologyId3D [1] CaptureDeviceTechnologyId3D OPTIONAL,
        ...
    }
    Modus3DCode ::= ENUMERATED {
        unknown (0),
        active (1),
        passive (2)
    }
Modus3DExtensionBlock ::= SEQUENCE {
    fallback [0] Modus3DCode,
    ...
}
Modus3D ::= CHOICE {
    code [0] Modus3DCode,
    extensionBlock [1] Modus3DExtensionBlock
}
CaptureDeviceTechnologyId3DCode ::= ENUMERATED {
    unknown (0),
    stereoscopicScanner (1),
    movingLaserLine (2),
    structuredLight (3),
    colourCodedLight (4),
    timeOfFlight (5),
    shapeFromShading (6)
}
CaptureDeviceTechnologyId3DExtensionBlock ::= SEQUENCE {
    fallback [0] CaptureDeviceTechnologyId3DCode,
    ...
}
CaptureDeviceTechnologyId3D ::= CHOICE {
    code [0] CaptureDeviceTechnologyId3DCode,
    extensionBlock [1] CaptureDeviceTechnologyId3DExtensionBlock
}
ImageInformation3DBlock ::= SEQUENCE {
    representationKind3D [0] RepresentationKind3D,
    coordinateSystem3D [1] CoordinateSystem3D,
    cartesianScalesAndOffsets3DBlock [2] CartesianScalesAndOffsets3DBlock,
    imageColourSpace [3] ImageColourSpace OPTIONAL,
    faceImageKind3D [4] FaceImageKind3D OPTIONAL,
    imageSizeBlock [5] ImageSizeBlock OPTIONAL,
    physicalFaceMeasurements3DBlock [6] PhysicalFaceMeasurements3DBlock OPTIONAL,
    postAcquisitionProcessingBlock [7] PostAcquisitionProcessingBlock OPTIONAL,
    texturedImageResolution3DBlock [8] TexturedImageResolution3DBlock OPTIONAL,
    textureMap3DBlock [9] TextureMap3DBlock OPTIONAL,
    ...
}
RepresentationKind3D ::= CHOICE {
    base [0] RepresentationKind3DBase,
    extensionBlock [1] RepresentationKind3DExtensionBlock
}
RepresentationKind3DBase ::= CHOICE {
    vertex3DBlock [0] Vertex3DBlock
}
RepresentationKind3DExtensionBlock ::= SEQUENCE {
    ...
}
Vertex3DBlock ::= SEQUENCE {
    vertexInformation3DBlocks [0] VertexInformation3DBlocks OPTIONAL,
    vertexTriangleData3DBlocks [1] VertexTriangleData3DBlocks OPTIONAL,
    ...
}
VertexInformation3DBlocks ::= SEQUENCE OF VertexInformation3DBlock
VertexInformation3DBlock ::= SEQUENCE {
    vertexCoordinates3DBlock [0] CoordinateCartesian3DUnsignedShortBlock,
    vertexId3D [1] INTEGER (0..MAX) OPTIONAL,
    vertexNormals3DBlock [2] CoordinateCartesian3DUnsignedShortBlock OPTIONAL,
    vertexTextures3DBlock [3] CoordinateCartesian2DUnsignedShortBlock OPTIONAL,
    errorMap3D [4] OCTET STRING OPTIONAL,
    ...
}
VertexTriangleData3DBlocks ::= SEQUENCE OF VertexTriangleData3DBlock
VertexTriangleData3DBlock ::= SEQUENCE {
    triangleIndex1 [0] INTEGER (0..MAX),
    triangleIndex2 [1] INTEGER (0..MAX),
    triangleIndex3 [2] INTEGER (0..MAX)
}
CoordinateSystem3DCode ::= ENUMERATED {
    cartesianCoordinateSystem3D (0)
}
CoordinateSystem3DExtensionBlock ::= SEQUENCE {
    fallback [0] CoordinateSystem3DCode,
    ...
}
CoordinateSystem3D ::= CHOICE {
    code [0] CoordinateSystem3DCode,
    extensionBlock [1] CoordinateSystem3DExtensionBlock
}
CartesianScalesAndOffsets3DBlock ::= SEQUENCE {
    scaleX [0] REAL,
    scaleY [1] REAL,
    scaleZ [2] REAL,
    offsetX [3] REAL,
    offsetY [4] REAL,
    offsetZ [5] REAL
}
FaceImageKind3DCode ::= ENUMERATED {
    texturedFaceImage3d (0)
}
FaceImageKind3DExtensionBlock ::= SEQUENCE {
    fallback [0] FaceImageKind3DCode,
    ...
}
FaceImageKind3D ::= CHOICE {
    code [0] FaceImageKind3DCode,
    extensionBlock [1] FaceImageKind3DExtensionBlock
}
PhysicalFaceMeasurements3DBlock ::= SEQUENCE {
    physicalHeadWidth3D [0] INTEGER OPTIONAL,
    physicalInterEyeDistance3D [1] INTEGER OPTIONAL,
    physicalEyeToMouthDistance3D [2] INTEGER OPTIONAL,
    physicalHeadLength3D [3] INTEGER OPTIONAL,
    ...
}
TexturedImageResolution3DBlock ::= SEQUENCE {
    mMShapeXResolution3D [0] REAL OPTIONAL,
    mMShapeYResolution3D [1] REAL OPTIONAL,
    mMShapeZResolution3D [2] REAL OPTIONAL,
    mMTextureResolution3D [3] REAL OPTIONAL,
    textureAcquisitionPeriod3D [4] REAL OPTIONAL,
    faceAreaScanned3DBlock [5] FaceAreaScanned3DBlock OPTIONAL,
    ...
}
FaceAreaScanned3DBlock ::= SEQUENCE {
    frontOfTheHead [0] BOOLEAN OPTIONAL,
    chin [1] BOOLEAN OPTIONAL,
    ears [2] BOOLEAN OPTIONAL,
    neck [3] BOOLEAN OPTIONAL,
    backOfTheHead [4] BOOLEAN OPTIONAL,
    fullHead [5] BOOLEAN OPTIONAL,
    ...
}
TextureMap3DBlock ::= SEQUENCE {
    textureMapData3D [0] OCTET STRING,
    imageDataFormat [1] ImageDataFormat,
    textureCaptureDeviceSpectral3D [2] TextureCaptureDeviceSpectral3D OPTIONAL,
    textureStandardIlluminant3D [3] TextureStandardIlluminant3D OPTIONAL,
    errorMap3D [4] OCTET STRING OPTIONAL,
    ...
}
TextureCaptureDeviceSpectral3DCode ::= ENUMERATED {
    unknown (0),
    other (1),
    white (2),
    veryNearInfrared (3),
    shortWaveInfrared (4)
}
TextureCaptureDeviceSpectral3DExtensionBlock ::= SEQUENCE {
    fallback [0] TextureCaptureDeviceSpectral3DCode,
    ...
}
TextureCaptureDeviceSpectral3D ::= CHOICE {
    code [0] TextureCaptureDeviceSpectral3DCode,
    extensionBlock [1] TextureCaptureDeviceSpectral3DExtensionBlock
}
TextureStandardIlluminant3DCode ::= ENUMERATED {
    d30 (0),
    d35 (1),
    d40 (2),
    d45 (3),
    d50 (4),
    d55 (5),
    d60 (6),
    d65 (7),
    d70 (8),
    d75 (9),
    d80 (10)
}
TextureStandardIlluminant3DExtensionBlock ::= SEQUENCE {
    fallback [0] TextureStandardIlluminant3DCode,
    ...
}
TextureStandardIlluminant3D ::= CHOICE {
    code [0] TextureStandardIlluminant3DCode,
    extensionBlock [1] TextureStandardIlluminant3DExtensionBlock
}
END 
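As an illustration of how the block structures defined in the module above serialize as tag-length-value octets, here is a minimal sketch (not part of the standard) that DER-encodes an ImageSizeBlock, assuming IMPLICIT context-specific tagging for the [0] and [1] components; the function name and the tagging assumption are illustrative only.

```python
# Sketch: DER-encode ImageSizeBlock ::= SEQUENCE { width [0] ImageSize,
# height [1] ImageSize }, assuming IMPLICIT context-specific tags.

def der_len(n: int) -> bytes:
    # Definite-form length octets; short form covers lengths < 128.
    if n < 0x80:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def der_uint(value: int) -> bytes:
    # INTEGER contents: minimal big-endian, with a 0x00 pad octet when
    # the high bit is set (DER integers are two's complement).
    body = value.to_bytes(max(1, (value.bit_length() + 7) // 8), "big")
    if body[0] & 0x80:
        body = b"\x00" + body
    return body

def tlv(tag: int, contents: bytes) -> bytes:
    return bytes([tag]) + der_len(len(contents)) + contents

def encode_image_size_block(width: int, height: int) -> bytes:
    # [0] and [1] become context-specific primitive tags 0x80 and 0x81;
    # the outer SEQUENCE is universal constructed tag 0x30.
    contents = tlv(0x80, der_uint(width)) + tlv(0x81, der_uint(height))
    return tlv(0x30, contents)

print(encode_image_size_block(640, 480).hex())  # → 300880020280810201e0
```

The same TLV helpers extend naturally to the nested SEQUENCE and CHOICE structures above, since every block is itself a tagged TLV.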

A.2 XML schema definition for XML encoding

The XSD module below can be retrieved from http://standards.iso.org/iso-iec/39794/-5/ed-1/en 
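Before the schema listing, a minimal sketch of building a skeletal instance document in the schema's target namespace with Python's standard library; the element names under `faceImageData` follow the schema below, while the `generation` child of the version block is a hypothetical stand-in for whatever the ISO/IEC 39794-1 schema actually defines there.

```python
# Sketch (not from the standard): a skeletal 39794-5 instance document,
# showing the qualified-element convention (elementFormDefault="qualified").
import xml.etree.ElementTree as ET

NS5 = "http://standards.iso.org/iso-iec/39794/-5"
NS1 = "http://standards.iso.org/iso-iec/39794/-1"

root = ET.Element(f"{{{NS5}}}faceImageData")
version = ET.SubElement(root, f"{{{NS5}}}versionBlock")
# VersionBlockType lives in the Part 1 schema; this child name is an
# assumed placeholder, not taken from this document.
ET.SubElement(version, f"{{{NS1}}}generation").text = "1"
blocks = ET.SubElement(root, f"{{{NS5}}}representationBlocks")
rep = ET.SubElement(blocks, f"{{{NS5}}}representationBlock")
ET.SubElement(rep, f"{{{NS5}}}representationId").text = "1"

print(ET.tostring(root, encoding="unicode"))
```

A conformant instance would additionally need the mandatory `imageRepresentation` content and should be validated against the XSD below with a schema-aware processor.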
<?xml version="1.0" encoding="utf-8"?>
<!--Use of ISO/IEC copyright in this Schema is licensed for the purpose of developing,
implementing, and using software based on this Schema, subject to the following
conditions:
* Software developed from this Schema must retain the Copyright Notice, this list of
conditions and the disclaimer below ("Disclaimer").

* Neither the name or logo of ISO or of IEC, nor the names of specific contributors, may be used to endorse or promote software derived from this Schema without specific prior written permission.
* The software developer shall attribute the Schema to ISO/IEC and identify the ISO/IEC standard from which it is taken. Such attribution (e.g., "This software makes use of the Schema from ISO/IEC 39794-5 within modifications permitted in the relevant ISO/IEC standard. Please reproduce this note if possible."), may be placed in the software itself or any other reasonable location.
The Disclaimer is: 
THE SCHEMA ON WHICH THIS SOFTWARE IS BASED IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THE CODE COMPONENTS, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -->
<xs:schema
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:vc="http://www.w3.org/2007/XMLSchema-versioning"
    xmlns:cmn="http://standards.iso.org/iso-iec/39794/-1"
    xmlns="http://standards.iso.org/iso-iec/39794/-5"
    vc:minVersion="1.0"
    targetNamespace="http://standards.iso.org/iso-iec/39794/-5"
    elementFormDefault="qualified"
    attributeFormDefault="unqualified">
    <xs:import namespace="http://standards.iso.org/iso-iec/39794/-1" schemaLocation="iso-iec-39794-1-ed-1-v1.xsd" />
    <xs:element name="faceImageData" type="FaceImageDataBlockType">
        <xs:annotation>
            <xs:documentation>root element</xs:documentation>
        </xs:annotation>
    </xs:element>
    <xs:complexType name="FaceImageDataBlockType">
        <xs:sequence>
            <xs:element name="versionBlock" type="cmn:VersionBlockType" />
            <xs:element name="representationBlocks" type="RepresentationBlocksType" />
            <xs:any namespace="##other" processContents="lax" minOccurs="0" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="RepresentationBlocksType">
        <xs:sequence>
            <xs:element name="representationBlock" type="RepresentationBlockType" maxOccurs="unbounded" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="RepresentationBlockType">
        <xs:sequence>
            <xs:element name="representationId" type="xs:unsignedInt" />
            <xs:element name="imageRepresentation" type="ImageRepresentationType" />
            <xs:element name="captureDateTimeBlock" type="cmn:CaptureDateTimeBlockType" minOccurs="0" />
            <xs:element name="qualityBlocks" type="cmn:QualityBlocksType" minOccurs="0" />
            <xs:element name="padDataBlock" type="cmn:PADDataBlockType" minOccurs="0" />
            <xs:element name="sessionId" type="xs:unsignedInt" minOccurs="0" />
            <xs:element name="derivedFrom" type="xs:unsignedInt" minOccurs="0" />
            <xs:element name="captureDeviceBlock" type="CaptureDeviceBlockType" minOccurs="0" />
            <xs:element name="identityMetadataBlock" type="IdentityMetadataBlockType" minOccurs="0" />
            <xs:element name="landmarkBlocks" type="LandmarkBlocksType" minOccurs="0" />
            <xs:any namespace="##other" processContents="lax" minOccurs="0" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="CaptureDeviceBlockType">
        <xs:sequence>
                <xs:element name="modelIdBlock" type="cmn:RegistryIdBlockType" minOccurs="0" />
                <xs:element name="certificationIdBlocks" type="cmn:CertificationIdBlocksType" minOccurs="0" />
                <xs:any namespace="##other" processContents="lax" minOccurs="0" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="IdentityMetadataBlockType">
        <xs:sequence>
            <xs:element name="gender" type="GenderType" minOccurs="0" />
            <xs:element name="eyeColour" type="EyeColourType" minOccurs="0" />
            <xs:element name="hairColour" type="HairColourType" minOccurs="0" />
            <xs:element name="subjectHeight" type="SubjectHeightType" minOccurs="0" />
            <xs:element name="propertiesBlock" type="PropertiesBlockType" minOccurs="0" />
            <xs:element name="expressionBlock" type="ExpressionBlockType" minOccurs="0" />
            <xs:element name="poseAngleBlock" type="PoseAngleBlockType" minOccurs="0" />
            <xs:any namespace="##other" processContents="lax" minOccurs="0" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="GenderCodeType">
        <xs:choice>
            <xs:element name="unknown" type="xs:int" fixed="0" />
            <xs:element name="other" type="xs:int" fixed="1" />
            <xs:element name="male" type="xs:int" fixed="2" />
            <xs:element name="female" type="xs:int" fixed="3" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="GenderExtensionBlockType">
        <xs:sequence>
            <xs:element name="fallback" type="GenderCodeType" />
            <xs:any namespace="##other" processContents="lax" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="GenderType">
        <xs:choice>
            <xs:element name="code" type="GenderCodeType" />
            <xs:element name="extensionBlock" type="GenderExtensionBlockType" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="EyeColourCodeType">
        <xs:choice>
            <xs:element name="unknown" type="xs:int" fixed="0" />
            <xs:element name="other" type="xs:int" fixed="1" />
            <xs:element name="black" type="xs:int" fixed="2" />
            <xs:element name="blue" type="xs:int" fixed="3" />
            <xs:element name="brown" type="xs:int" fixed="4" />
            <xs:element name="grey" type="xs:int" fixed="5" />
            <xs:element name="green" type="xs:int" fixed="6" />
            <xs:element name="hazel" type="xs:int" fixed="7" />
            <xs:element name="multi-coloured" type="xs:int" fixed="8" />
            <xs:element name="pink" type="xs:int" fixed="9" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="EyeColourExtensionBlockType">
        <xs:sequence>
            <xs:element name="fallback" type="EyeColourCodeType" />
            <xs:any namespace="##other" processContents="lax" />
        </xs:sequence>
    </xs:complexType>

<xs:complexType name="EyeColourType">
    <xs:choice>
        <xs:element name="code" type="EyeColourCodeType" />
        <xs:element name="extensionBlock" type="EyeColourExtensionBlockType" />
    </xs:choice>
</xs:complexType>
<xs:complexType name="HairColourCodeType">
    <xs:choice>
        <xs:element name="unknown" type="xs:int" fixed="0" />
        <xs:element name="other" type="xs:int" fixed="1" />
        <xs:element name="bald" type="xs:int" fixed="2" />
        <xs:element name="black" type="xs:int" fixed="3" />
        <xs:element name="blonde" type="xs:int" fixed="4" />
        <xs:element name="brown" type="xs:int" fixed="5" />
        <xs:element name="grey" type="xs:int" fixed="6" />
        <xs:element name="white" type="xs:int" fixed="7" />
        <xs:element name="red" type="xs:int" fixed="8" />
        <xs:element name="knownColoured" type="xs:int" fixed="9" />
    </xs:choice>
</xs:complexType>
<xs:complexType name="HairColourExtensionBlockType">
    <xs:sequence>
        <xs:element name="fallback" type="HairColourCodeType" />
        <xs:any namespace="##other" processContents="lax" />
    </xs:sequence>
</xs:complexType>
<xs:complexType name="HairColourType">
    <xs:choice>
        <xs:element name="code" type="HairColourCodeType" />
        <xs:element name="extensionBlock" type="HairColourExtensionBlockType" />
    </xs:choice>
</xs:complexType>
<xs:simpleType name="SubjectHeightType">
    <xs:restriction base="xs:unsignedInt">
        <xs:minInclusive value="1" />
        <xs:maxInclusive value="65535" />
    </xs:restriction>
</xs:simpleType>
<xs:complexType name="PropertiesBlockType">
    <xs:sequence>
        <xs:element name="glasses" type="xs:boolean" minOccurs="0" />
        <xs:element name="moustache" type="xs:boolean" minOccurs="0" />
        <xs:element name="beard" type="xs:boolean" minOccurs="0" />
        <xs:element name="teethVisible" type="xs:boolean" minOccurs="0" />
        <xs:element name="pupilOrIrisNotVisible" type="xs:boolean" minOccurs="0" />
        <xs:element name="mouthOpen" type="xs:boolean" minOccurs="0" />
        <xs:element name="leftEyePatch" type="xs:boolean" minOccurs="0" />
        <xs:element name="rightEyePatch" type="xs:boolean" minOccurs="0" />
        <xs:element name="darkGlasses" type="xs:boolean" minOccurs="0" />
        <xs:element name="biometricAbsent" type="xs:boolean" minOccurs="0" />
        <xs:element name="headCoveringsPresent" type="xs:boolean" minOccurs="0" />
        <xs:any namespace="##other" processContents="lax" minOccurs="0" />
    </xs:sequence>
</xs:complexType>
<xs:complexType name="ExpressionBlockType">
    <xs:sequence>
        <xs:element name="neutral" type="xs:boolean" minOccurs="0" />
        <xs:element name="smile" type="xs:boolean" minOccurs="0" />
        <xs:element name="raisedEyebrows" type="xs:boolean" minOccurs="0" />
        <xs:element name="eyesLookingAwayFromTheCamera" type="xs:boolean" minOccurs="0" />
        <xs:element name="squinting" type="xs:boolean" minOccurs="0" />
        <xs:element name="frowning" type="xs:boolean" minOccurs="0" />
        <xs:any namespace="##other" processContents="lax" minOccurs="0" />
    </xs:sequence>
</xs:complexType>
<xs:complexType name="PoseAngleBlockType">
    <xs:sequence>
        <xs:element name="yawAngleBlock" type="AngleDataBlockType" minOccurs="0" />
        <xs:element name="pitchAngleBlock" type="AngleDataBlockType" minOccurs="0" />
        <xs:element name="rollAngleBlock" type="AngleDataBlockType" minOccurs="0" />
    </xs:sequence>
</xs:complexType>
<xs:complexType name="AngleDataBlockType">
    <xs:sequence>
        <xs:element name="angleValue" type="AngleValueType" />
        <xs:element name="angleUncertainty" type="AngleUncertaintyType" minOccurs="0" />
        <xs:any namespace="##other" processContents="lax" minOccurs="0" />
    </xs:sequence>
</xs:complexType>
<xs:simpleType name="AngleValueType">
    <xs:restriction base="xs:integer">
        <xs:minInclusive value="-180" />
        <xs:maxInclusive value="180" />
    </xs:restriction>
</xs:simpleType>
<xs:simpleType name="AngleUncertaintyType">
    <xs:restriction base="xs:integer">
        <xs:minInclusive value="0" />
        <xs:maxInclusive value="180" />
    </xs:restriction>
</xs:simpleType>
<xs:complexType name="LandmarkBlocksType">
    <xs:sequence>
        <xs:element name="landmarkBlock" type="LandmarkBlockType" maxOccurs="unbounded" />
    </xs:sequence>
</xs:complexType>
<xs:complexType name="LandmarkBlockType">
    <xs:sequence>
        <xs:element name="landmarkKind" type="LandmarkKindType" />
        <xs:element name="landmarkCoordinates" type="LandmarkCoordinatesType" minOccurs="0" />
        <xs:any namespace="##other" processContents="lax" minOccurs="0" />
    </xs:sequence>
</xs:complexType>
<xs:complexType name="LandmarkKindBaseType">
    <xs:choice>
        <xs:element name="mpeg4FeaturePoint" type="MPEG4FeaturePointType" />
        <xs:element name="anthropometricLandmark" type="AnthropometricLandmarkType" />
    </xs:choice>
</xs:complexType>
<xs:complexType name="LandmarkKindExtensionBlockType">
    <xs:sequence>
        <xs:any namespace="##other" processContents="lax"/>
    </xs:sequence>
</xs:complexType>
<xs:complexType name="LandmarkKindType">
    <xs:choice>
        <xs:element name="base" type="LandmarkKindBaseType"/>
        <xs:element name="extensionBlock" type="LandmarkKindExtensionBlockType"/>
    </xs:choice>
</xs:complexType>
<xs:complexType name="MPEG4FeaturePointCodeType">
    <xs:choice>
        <xs:element name="mpeg4PointCode-02-01" type="xs:int" fixed="0" />
        <xs:element name="mpeg4PointCode-02-02" type="xs:int" fixed="1" />
        <xs:element name="mpeg4PointCode-02-03" type="xs:int" fixed="2" />
<xs:element name="mpeg4PointCode-02-04"  type="xs:int" fixed="3"  />
<xs:element name="mpeg4PointCode-02-05"  type="xs:int" fixed="4"  />
<xs:element name="mpeg4PointCode-02-06"  type="xs:int" fixed="5"  />
<xs:element name="mpeg4PointCode-02-07" type="xs:int" fixed="6" />
<xs:element name="mpeg4PointCode-02-08" type="xs:int" fixed="7" />
<xs:element name="mpeg4PointCode-02-09" type="xs:int" fixed="8" />
<xs:element name="mpeg4PointCode-02-10" type="xs:int" fixed="9" />
<xs:element name="mpeg4PointCode-02-11" type="xs:int" fixed="10" />
<xs:element name="mpeg4PointCode-02-12" type="xs:int" fixed="11" />
<xs:element name="mpeg4PointCode-02-13" type="xs:int" fixed="12" />
<xs:element name="mpeg4PointCode-02-14" type="xs:int" fixed="13" />
<xs:element name="mpeg4PointCode-03-01" type="xs:int" fixed="14" />
<xs:element name="mpeg4PointCode-03-02" type="xs:int" fixed="15" />
<xs:element name="mpeg4PointCode-03-03" type="xs:int" fixed="16" />
<xs:element name="mpeg4PointCode-03-04" type="xs:int" fixed="17" />
<xs:element name="mpeg4PointCode-03-05" type="xs:int" fixed="18" />
<xs:element name="mpeg4PointCode-03-06" type="xs:int" fixed="19" />
<xs:element name="mpeg4PointCode-03-07" type="xs:int" fixed="20" />
<xs:element name="mpeg4PointCode-03-08" type="xs:int" fixed="21" />
<xs:element name="mpeg4PointCode-03-09" type="xs:int" fixed="22" />
<xs:element name="mpeg4PointCode-03-10" type="xs:int" fixed="23" />
<xs:element name="mpeg4PointCode-03-11" type="xs:int" fixed="24" />
<xs:element name="mpeg4PointCode-03-12" type="xs:int" fixed="25" />
<xs:element name="mpeg4PointCode-03-13" type="xs:int" fixed="26" />
<xs:element name="mpeg4PointCode-03-14" type="xs:int" fixed="27" />
<xs:element name="mpeg4PointCode-04-01" type="xs:int" fixed="28" />
<xs:element name="mpeg4PointCode-04-02"  type="xs:int"  fixed="29"  /> 
<xs:element  name="mpeg4PointCode-04-03"  type="xs:int"  fixed="30"  /> 
<xs:element  name="mpeg4PointCode-04-04"  type="xs:int"  fixed="31"  /> 
<xs:element  name="mpeg4PointCode-04-05"  type="xs:int"  fixed="32"  /> 
<xs:element  name="mpeg4PointCode-04-06"  type="xs:int"  fixed="33"  /> 
<xs:element  name="mpeg4PointCode-05-01"  type="xs:int"  fixed="34"  /> 
<xs:element  name="mpeg4PointCode-05-02"  type="xs:int"  fixed="35"  /> 
<xs:element  name="mpeg4PointCode-05-03"  type="xs:int"  fixed="36"  /> 
<xs:element  name="mpeg4PointCode-05-04"  type="xs:int"  fixed="37"  /> 
<xs:element  name="mpeg4PointCode-06-01"  type="xs:int"  fixed="38"  /> 
<xs:element  name="mpeg4PointCode-06-02"  type="xs:int"  fixed="39"  /> 
<xs:element  name="mpeg4PointCode-06-03"  type="xs:int"  fixed="40"  /> 
<xs:element  name="mpeg4PointCode-06-04"  type="xs:int"  fixed="41"  /> 
<xs:element  name="mpeg4PointCode-07-01"  type="xs:int"  fixed="42"  /> 
<xs:element  name="mpeg4PointCode-08-01"  type="xs:int"  fixed="43"  /> 
<xs:element  name="mpeg4PointCode-08-02"  type="xs:int"  fixed="44"  /> 
<xs:element  name="mpeg4PointCode-08-03"  type="xs:int"  fixed="45"  /> 
<xs:element  name="mpeg4PointCode-08-04"  type="xs:int"  fixed="46"  /> 
<xs:element  name="mpeg4PointCode-08-05"  type="xs:int"  fixed="47"  /> 
<xs:element  name="mpeg4PointCode-08-06"  type="xs:int"  fixed="48"  /> 
<xs:element  name="mpeg4PointCode-08-07"  type="xs:int"  fixed="49"  /> 
<xs:element  name="mpeg4PointCode-08-08"  type="xs:int"  fixed="50"  /> 
<xs:element  name="mpeg4PointCode-08-09"  type="xs:int"  fixed="51"  /> 
<xs:element  name="mpeg4PointCode-08-10"  type="xs:int"  fixed="52"  /> 
<xs:element  name="mpeg4PointCode-09-01"  type="xs:int"  fixed="53"  /> 
<xs:element  name="mpeg4PointCode-09-02"  type="xs:int"  fixed="54"  /> 
<xs:element  name="mpeg4PointCode-09-03"  type="xs:int"  fixed="55"  /> 
<xs:element  name="mpeg4PointCode-09-04"  type="xs:int"  fixed="56"  /> 
<xs:element  name="mpeg4PointCode-09-05"  type="xs:int"  fixed="57"  /> 
<xs:element  name="mpeg4PointCode-09-06"  type="xs:int"  fixed="58"  /> 
<xs:element  name="mpeg4PointCode-09-07"  type="xs:int"  fixed="59"  /> 
<xs:element  name="mpeg4PointCode-09-08"  type="xs:int"  fixed="60"  /> 
<xs:element  name="mpeg4PointCode-09-09"  type="xs:int"  fixed="61"  /> 
<xs:element  name="mpeg4PointCode-09-10"  type="xs:int"  fixed="62"  /> 
<xs:element  name="mpeg4PointCode-09-11"  type="xs:int"  fixed="63"  /> 
<xs:element  name="mpeg4PointCode-09-12"  type="xs:int"  fixed="64"  /> 
<xs:element  name="mpeg4PointCode-09-13"  type="xs:int"  fixed="65"  /> 
<xs:element  name="mpeg4PointCode-09-14"  type="xs:int"  fixed="66"  /> 
<xs:element  name="mpeg4PointCode-09-15"  type="xs:int"  fixed="67"  /> 
<xs:element  name="mpeg4PointCode-10-01"  type="xs:int"  fixed="68"  /> 
<xs:element name="mpeg4PointCode-10-02" type="xs:int" fixed="69" />
<xs:element name="mpeg4PointCode-10-03" type="xs:int" fixed="70" />
<xs:element name="mpeg4PointCode-10-04" type="xs:int" fixed="71" />
<xs:element name="mpeg4PointCode-10-05" type="xs:int" fixed="72" />
<xs:element name="mpeg4PointCode-10-06" type="xs:int" fixed="73" />
<xs:element | name="mpeg4PointCode-07-01" | type="xs:int" | fixed="42" | /> | | <xs:element | name="mpeg4PointCode-08-01" | type="xs:int" | fixed="43" | /> | | <xs:element | name="mpeg4PointCode-08-02" | type="xs:int" | fixed="44" | /> | | <xs:element | name="mpeg4PointCode-08-03" | type="xs:int" | fixed="45" | /> | | <xs:element | name="mpeg4PointCode-08-04" | type="xs:int" | fixed="46" | /> | | <xs:element | name="mpeg4PointCode-08-05" | type="xs:int" | fixed="47" | /> | | <xs:element | name="mpeg4PointCode-08-06" | type="xs:int" | fixed="48" | /> | | <xs:element | name="mpeg4PointCode-08-07" | type="xs:int" | fixed="49" | /> | | <xs:element | name="mpeg4PointCode-08-08" | type="xs:int" | fixed="50" | /> | | <xs:element | name="mpeg4PointCode-08-09" | type="xs:int" | fixed="51" | /> | | <xs:element | name="mpeg4PointCode-08-10" | type="xs:int" | fixed="52" | /> | | <xs:element | name="mpeg4PointCode-09-01" | type="xs:int" | fixed="53" | /> | | <xs:element | name="mpeg4PointCode-09-02" | type="xs:int" | fixed="54" | /> | | <xs:element | name="mpeg4PointCode-09-03" | type="xs:int" | fixed="55" | /> | | <xs:element | name="mpeg4PointCode-09-04" | type="xs:int" | fixed="56" | /> | | <xs:element | name="mpeg4PointCode-09-05" | type="xs:int" | fixed="57" | /> | | <xs:element | name="mpeg4PointCode-09-06" | type="xs:int" | fixed="58" | /> | | <xs:element | name="mpeg4PointCode-09-07" | type="xs:int" | fixed="59" | /> | | <xs:element | name="mpeg4PointCode-09-08" | type="xs:int" | fixed="60" | /> | | <xs:element | name="mpeg4PointCode-09-09" | type="xs:int" | fixed="61" | /> | | <xs:element | name="mpeg4PointCode-09-10" | type="xs:int" | fixed="62" | /> | | <xs:element | name="mpeg4PointCode-09-11" | type="xs:int" | fixed="63" | /> | | <xs:element | name="mpeg4PointCode-09-12" | type="xs:int" | fixed="64" | /> | | <xs:element | name="mpeg4PointCode-09-13" | type="xs:int" | fixed="65" | /> | | <xs:element | name="mpeg4PointCode-09-14" | type="xs:int" | fixed="66" | /> | | 
<xs:element | name="mpeg4PointCode-09-15" | type="xs:int" | fixed="67" | /> | | <xs:element | name="mpeg4PointCode-10-01" | type="xs:int" | fixed="68" | /> | | <xs:element | name="mpeg4PointCode-10-02" | type="xs:int" | fixed="69" | /> | | <xs:element | name="mpeg4PointCode-10-03" | type="xs:int" | fixed="70" | /> | | <xs:element | name="mpeg4PointCode-10-04" | type="xs:int" | fixed="71" | /> | | <xs:element | name="mpeg4PointCode-10-05" | type="xs:int" | fixed="72" | /> | | <xs:element | name="mpeg4PointCode-10-06" | type="xs:int" | fixed="73" | /> |
            <xs:element name="mpeg4PointCode-10-07" type="xs:int" fixed="74" />
            <xs:element name="mpeg4PointCode-10-08" type="xs:int" fixed="75" />
            <xs:element name="mpeg4PointCode-10-09" type="xs:int" fixed="76" />
            <xs:element name="mpeg4PointCode-10-10" type="xs:int" fixed="77" />
            <xs:element name="mpeg4PointCode-11-01" type="xs:int" fixed="78" />
            <xs:element name="mpeg4PointCode-11-02" type="xs:int" fixed="79" />
            <xs:element name="mpeg4PointCode-11-03" type="xs:int" fixed="80" />
            <xs:element name="mpeg4PointCode-11-04" type="xs:int" fixed="81" />
            <xs:element name="mpeg4PointCode-11-05" type="xs:int" fixed="82" />
            <xs:element name="mpeg4PointCode-11-06" type="xs:int" fixed="83" />
            <xs:element name="mpeg4PointCode-12-01" type="xs:int" fixed="84" />
            <xs:element name="mpeg4PointCode-12-02" type="xs:int" fixed="85" />
            <xs:element name="mpeg4PointCode-12-03" type="xs:int" fixed="86" />
                <xs:element name="mpeg4PointCode-12-04" type="xs:int" fixed="87" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="MPEG4FeaturePointExtensionBlockType">
        <xs:sequence>
            <xs:element name="fallback" type="MPEG4FeaturePointCodeType" />
            <xs:any namespace="##other" processContents="lax" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="MPEG4FeaturePointType">
        <xs:choice>
                <xs:element name="code" type="MPEG4FeaturePointCodeType" />
                <xs:element name="extensionBlock" type="MPEG4FeaturePointExtensionBlockType" />
        </xs:choice>
    </xs:complexType>
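    <!-- Informative example (non-normative): a possible MPEG4FeaturePointType instance
         fragment selecting the "code" alternative, here feature point 07-01
         (fixed value 42). The enclosing element name "featurePoint" is illustrative
         only; actual element names are defined where this type is used.
    <featurePoint>
        <code>
            <mpeg4PointCode-07-01>42</mpeg4PointCode-07-01>
        </code>
    </featurePoint>
    -->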
    <xs:complexType name="AnthropometricLandmarkBaseType">
        <xs:choice>
            <xs:element name="anthropometricLandmarkName" type="AnthropometricLandmarkNameType" />
            <xs:element name="anthropometricLandmarkPointName" type="AnthropometricLandmarkPointNameType" />
            <xs:element name="anthropometricLandmarkPointId" type="AnthropometricLandmarkPointIdType" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="AnthropometricLandmarkExtensionBlockType">
        <xs:sequence>
                <xs:any namespace="##other" processContents="lax"/>
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="AnthropometricLandmarkType">
        <xs:choice>
                <xs:element name="base" type="AnthropometricLandmarkBaseType"/>
                <xs:element name="extensionBlock" type="AnthropometricLandmarkExtensionBlockType"/>
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="AnthropometricLandmarkNameCodeType">
        <xs:choice>
            <xs:element name="vertex" type="xs:int" fixed="0" />
            <xs:element name="glabella" type="xs:int" fixed="1" />
            <xs:element name="opisthocranion" type="xs:int" fixed="2" />
            <xs:element name="eurionLeft" type="xs:int" fixed="3" />
            <xs:element name="eurionRight" type="xs:int" fixed="4" />
            <xs:element name="frontotemporaleLeft" type="xs:int" fixed="5" />
            <xs:element name="frontotemporaleRight" type="xs:int" fixed="6" />
            <xs:element name="trichion" type="xs:int" fixed="7" />
            <xs:element name="zygionLeft" type="xs:int" fixed="8" />
            <xs:element name="zygionRight" type="xs:int" fixed="9" />
            <xs:element name="gonionLeft" type="xs:int" fixed="10" />
            <xs:element name="gonionRight" type="xs:int" fixed="11" />
            <xs:element name="sublabiale" type="xs:int" fixed="12" />
            <xs:element name="pogonion" type="xs:int" fixed="13" />

ISO/IEC 39794-5:2019(E)

        <xs:element name="menton" type="xs:int" fixed="14" />
        <xs:element name="condylionLateraleLeft" type="xs:int" fixed="15" />
        <xs:element name="condylionLateraleRight" type="xs:int" fixed="16" />
        <xs:element name="endocanthionLeft" type="xs:int" fixed="17" />
        <xs:element name="endocanthionRight" type="xs:int" fixed="18" />
        <xs:element name="exocanthionLeft" type="xs:int" fixed="19" />
        <xs:element name="exocanthionRight" type="xs:int" fixed="20" />
        <xs:element name="centerPointOfPupilLeft" type="xs:int" fixed="21" />
        <xs:element name="centerPointOfPupilRight" type="xs:int" fixed="22" />
        <xs:element name="orbitaleLeft" type="xs:int" fixed="23" />
        <xs:element name="orbitaleRight" type="xs:int" fixed="24" />
        <xs:element name="palpebraleSuperiusLeft" type="xs:int" fixed="25" />
        <xs:element name="palpebraleSuperiusRight" type="xs:int" fixed="26" />
        <xs:element name="palpebraleInferiusLeft" type="xs:int" fixed="27" />
        <xs:element name="palpebraleInferiusRight" type="xs:int" fixed="28" />
        <xs:element name="orbitaleSuperiusLeft" type="xs:int" fixed="29" />
        <xs:element name="orbitaleSuperiusRight" type="xs:int" fixed="30" />
        <xs:element name="superciliareLeft" type="xs:int" fixed="31" />
        <xs:element name="superciliareRight" type="xs:int" fixed="32" />
        <xs:element name="nasion" type="xs:int" fixed="33" />
        <xs:element name="sellion" type="xs:int" fixed="34" />
        <xs:element name="alareLeft" type="xs:int" fixed="35" />
        <xs:element name="alareRight" type="xs:int" fixed="36" />
        <xs:element name="pronasale" type="xs:int" fixed="37" />
        <xs:element name="subnasale" type="xs:int" fixed="38" />
        <xs:element name="subalare" type="xs:int" fixed="39" />
        <xs:element name="alarCurvatureLeft" type="xs:int" fixed="40" />
        <xs:element name="alarCurvatureRight" type="xs:int" fixed="41" />
        <xs:element name="maxillofrontale" type="xs:int" fixed="42" />
        <xs:element name="christaPhiltraLandmarkLeft" type="xs:int" fixed="43" />
        <xs:element name="christaPhiltraLandmarkRight" type="xs:int" fixed="44" />
        <xs:element name="labialeSuperius" type="xs:int" fixed="45" />
        <xs:element name="labialeInferius" type="xs:int" fixed="46" />
        <xs:element name="cheilionLeft" type="xs:int" fixed="47" />
        <xs:element name="cheilionRight" type="xs:int" fixed="48" />
        <xs:element name="stomion" type="xs:int" fixed="49" />
        <xs:element name="superauraleLeft" type="xs:int" fixed="50" />
        <xs:element name="superauraleRight" type="xs:int" fixed="51" />
        <xs:element name="subauraleLeft" type="xs:int" fixed="52" />
        <xs:element name="subauraleRight" type="xs:int" fixed="53" />
        <xs:element name="preaurale" type="xs:int" fixed="54" />
        <xs:element name="postaurale" type="xs:int" fixed="55" />
        <xs:element name="otobasionSuperiusLeft" type="xs:int" fixed="56" />
        <xs:element name="otobasionSuperiusRight" type="xs:int" fixed="57" />
        <xs:element name="otobasionInferius" type="xs:int" fixed="58" />
        <xs:element name="porion" type="xs:int" fixed="59" />
        <xs:element name="tragion" type="xs:int" fixed="60" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="AnthropometricLandmarkNameExtensionBlockType">
        <xs:sequence>
            <xs:element name="fallback" type="AnthropometricLandmarkNameCodeType" />
            <xs:any namespace="##other" processContents="lax" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="AnthropometricLandmarkNameType">
        <xs:choice>
            <xs:element name="code" type="AnthropometricLandmarkNameCodeType" />
            <xs:element name="extensionBlock" type="AnthropometricLandmarkNameExtensionBlockType" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="AnthropometricLandmarkPointNameCodeType">
        <xs:choice>
            <xs:element name="pointCode-01-01" type="xs:int" fixed="0" />
            <xs:element name="pointCode-01-02" type="xs:int" fixed="1" />
            <xs:element name="pointCode-01-05" type="xs:int" fixed="2" />
            <xs:element name="pointCode-01-06" type="xs:int" fixed="3" />
        <xs:element name="pointCode-01-07" type="xs:int" fixed="4" />
        <xs:element name="pointCode-01-08" type="xs:int" fixed="5" />
        <xs:element name="pointCode-01-09" type="xs:int" fixed="6" />
        <xs:element name="pointCode-02-01" type="xs:int" fixed="7" />
        <xs:element name="pointCode-02-02" type="xs:int" fixed="8" />
        <xs:element name="pointCode-02-03" type="xs:int" fixed="9" />
        <xs:element name="pointCode-02-04" type="xs:int" fixed="10" />
        <xs:element name="pointCode-02-05" type="xs:int" fixed="11" />
        <xs:element name="pointCode-02-06" type="xs:int" fixed="12" />
        <xs:element name="pointCode-02-07" type="xs:int" fixed="13" />
        <xs:element name="pointCode-02-09" type="xs:int" fixed="14" />
        <xs:element name="pointCode-02-10" type="xs:int" fixed="15" />
        <xs:element name="pointCode-03-01" type="xs:int" fixed="16" />
        <xs:element name="pointCode-03-02" type="xs:int" fixed="17" />
        <xs:element name="pointCode-03-03" type="xs:int" fixed="18" />
        <xs:element name="pointCode-03-04" type="xs:int" fixed="19" />
        <xs:element name="pointCode-03-05" type="xs:int" fixed="20" />
        <xs:element name="pointCode-03-06" type="xs:int" fixed="21" />
        <xs:element name="pointCode-03-07" type="xs:int" fixed="22" />
        <xs:element name="pointCode-03-08" type="xs:int" fixed="23" />
        <xs:element name="pointCode-03-09" type="xs:int" fixed="24" />
        <xs:element name="pointCode-03-10" type="xs:int" fixed="25" />
        <xs:element name="pointCode-03-11" type="xs:int" fixed="26" />
        <xs:element name="pointCode-03-12" type="xs:int" fixed="27" />
        <xs:element name="pointCode-04-01" type="xs:int" fixed="28" />
        <xs:element name="pointCode-04-02" type="xs:int" fixed="29" />
        <xs:element name="pointCode-04-03" type="xs:int" fixed="30" />
        <xs:element name="pointCode-04-04" type="xs:int" fixed="31" />
        <xs:element name="pointCode-05-01" type="xs:int" fixed="32" />
        <xs:element name="pointCode-05-02" type="xs:int" fixed="33" />
        <xs:element name="pointCode-05-03" type="xs:int" fixed="34" />
        <xs:element name="pointCode-05-04" type="xs:int" fixed="35" />
        <xs:element name="pointCode-05-06" type="xs:int" fixed="36" />
        </xs:choice>
</xs:complexType>
<xs:complexType name="AnthropometricLandmarkPointNameExtensionBlockType">
        <xs:sequence>
            <xs:element name="fallback" type="AnthropometricLandmarkPointNameCodeType" />
            <xs:any namespace="##other" processContents="lax" />
        </xs:sequence>
</xs:complexType>
<xs:complexType name="AnthropometricLandmarkPointNameType">
        <xs:choice>
            <xs:element name="code" type="AnthropometricLandmarkPointNameCodeType" />
            <xs:element name="extensionBlock" type="AnthropometricLandmarkPointNameExtensionBlockType" />
        </xs:choice>
</xs:complexType>
<xs:complexType name="AnthropometricLandmarkPointIdCodeType">
        <xs:choice>
            <xs:element name="v" type="xs:int" fixed="0" />
            <xs:element name="g" type="xs:int" fixed="1" />
            <xs:element name="op" type="xs:int" fixed="2" />
            <xs:element name="eu-left" type="xs:int" fixed="3" />
            <xs:element name="eu-right" type="xs:int" fixed="4" />
            <xs:element name="ft-left" type="xs:int" fixed="5" />
            <xs:element name="ft-right" type="xs:int" fixed="6" />
            <xs:element name="tr" type="xs:int" fixed="7" />
            <xs:element name="zy-left" type="xs:int" fixed="8" />
            <xs:element name="zy-right" type="xs:int" fixed="9" />
            <xs:element name="go-left" type="xs:int" fixed="10" />
            <xs:element name="go-right" type="xs:int" fixed="11" />
            <xs:element name="sl" type="xs:int" fixed="12" />
            <xs:element name="pg" type="xs:int" fixed="13" />
            <xs:element name="gn" type="xs:int" fixed="14" />
            <xs:element name="cdl-left" type="xs:int" fixed="15" />
            <xs:element name="cdl-right" type="xs:int" fixed="16" />
            <xs:element name="en-left" type="xs:int" fixed="17" />

        <xs:element name="en-right" type="xs:int" fixed="18" />
        <xs:element name="ex-left" type="xs:int" fixed="19" />
        <xs:element name="ex-right" type="xs:int" fixed="20" />
        <xs:element name="p-left" type="xs:int" fixed="21" />
        <xs:element name="p-right" type="xs:int" fixed="22" />
        <xs:element name="or-left" type="xs:int" fixed="23" />
        <xs:element name="or-right" type="xs:int" fixed="24" />
        <xs:element name="ps-left" type="xs:int" fixed="25" />
        <xs:element name="ps-right" type="xs:int" fixed="26" />
        <xs:element name="pi-left" type="xs:int" fixed="27" />
        <xs:element name="pi-right" type="xs:int" fixed="28" />
        <xs:element name="os-left" type="xs:int" fixed="29" />
        <xs:element name="os-right" type="xs:int" fixed="30" />
        <xs:element name="sci-left" type="xs:int" fixed="31" />
        <xs:element name="sci-right" type="xs:int" fixed="32" />
        <xs:element name="n" type="xs:int" fixed="33" />
        <xs:element name="se" type="xs:int" fixed="34" />
        <xs:element name="al-left" type="xs:int" fixed="35" />
        <xs:element name="al-right" type="xs:int" fixed="36" />
        <xs:element name="prn" type="xs:int" fixed="37" />
        <xs:element name="sn" type="xs:int" fixed="38" />
        <xs:element name="sbal" type="xs:int" fixed="39" />
        <xs:element name="ac-left" type="xs:int" fixed="40" />
        <xs:element name="ac-right" type="xs:int" fixed="41" />
        <xs:element name="mf-left" type="xs:int" fixed="42" />
        <xs:element name="mf-right" type="xs:int" fixed="43" />
        <xs:element name="cph-left" type="xs:int" fixed="44" />
        <xs:element name="cph-right" type="xs:int" fixed="45" />
        <xs:element name="ls" type="xs:int" fixed="46" />
        <xs:element name="li" type="xs:int" fixed="47" />
        <xs:element name="ch-left" type="xs:int" fixed="48" />
        <xs:element name="ch-right" type="xs:int" fixed="49" />
        <xs:element name="sto" type="xs:int" fixed="50" />
        <xs:element name="sa-left" type="xs:int" fixed="51" />
        <xs:element name="sa-right" type="xs:int" fixed="52" />
        <xs:element name="sba-left" type="xs:int" fixed="53" />
        <xs:element name="sba-right" type="xs:int" fixed="54" />
        <xs:element name="pra-left" type="xs:int" fixed="55" />
        <xs:element name="pra-right" type="xs:int" fixed="56" />
        <xs:element name="pa" type="xs:int" fixed="57" />
        <xs:element name="obs-left" type="xs:int" fixed="58" />
        <xs:element name="obs-right" type="xs:int" fixed="59" />
        <xs:element name="obi" type="xs:int" fixed="60" />
        <xs:element name="po" type="xs:int" fixed="61" />
        <xs:element name="t" type="xs:int" fixed="62" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="AnthropometricLandmarkPointIdExtensionBlockType">
        <xs:sequence>
            <xs:element name="fallback" type="AnthropometricLandmarkPointIdCodeType" />
            <xs:any namespace="##other" processContents="lax" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="AnthropometricLandmarkPointIdType">
        <xs:choice>
            <xs:element name="code" type="AnthropometricLandmarkPointIdCodeType" />
            <xs:element name="extensionBlock" type="AnthropometricLandmarkPointIdExtensionBlockType" />
        </xs:choice>
    </xs:complexType>
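    <!-- Informative example (non-normative): an AnthropometricLandmarkPointIdType
         instance using the "code" alternative, here the identifier "prn"
         (fixed value 37). The enclosing element name matches the use of this type
         in AnthropometricLandmarkBaseType.
    <anthropometricLandmarkPointId>
        <code>
            <prn>37</prn>
        </code>
    </anthropometricLandmarkPointId>
    -->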
    <xs:complexType name="LandmarkCoordinatesBaseType">
        <xs:choice>
            <xs:element name="coordinateCartesian2DBlock" type="cmn:CoordinateCartesian2DUnsignedShortBlockType" />
            <xs:element name="coordinateTextureImageBlock" type="CoordinateTextureImageBlockType" />
            <xs:element name="coordinateCartesian3DBlock" type="cmn:CoordinateCartesian3DUnsignedShortBlockType" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="LandmarkCoordinatesExtensionBlockType">
        <xs:sequence>
            <xs:any namespace="##other" processContents="lax"/>
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="LandmarkCoordinatesType">
        <xs:choice>
            <xs:element name="base" type="LandmarkCoordinatesBaseType"/>
            <xs:element name="extensionBlock" type="LandmarkCoordinatesExtensionBlockType"/>
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="CoordinateTextureImageBlockType">
        <xs:sequence>
            <xs:element name="uInPixel" type="xs:unsignedInt" />
            <xs:element name="vInPixel" type="xs:unsignedInt" />
        </xs:sequence>
    </xs:complexType>
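    <!-- Informative example (non-normative): a CoordinateTextureImageBlockType instance
         giving a landmark position in texture image pixel coordinates; the pixel values
         shown are arbitrary.
    <coordinateTextureImageBlock>
        <uInPixel>312</uInPixel>
        <vInPixel>268</vInPixel>
    </coordinateTextureImageBlock>
    -->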
    <xs:complexType name="ImageRepresentationType">
        <xs:choice>
            <xs:element name="base" type="ImageRepresentationBaseType" />
            <xs:element name="extensionBlock" type="ImageRepresentationExtensionBlockType" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="ImageRepresentationBaseType">
        <xs:choice>
            <xs:element name="imageRepresentation2DBlock" type="ImageRepresentation2DBlockType" />
            <xs:element name="shapeRepresentation3DBlock" type="ShapeRepresentation3DBlockType" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="ImageRepresentationExtensionBlockType">
        <xs:sequence>
            <xs:any namespace="##other" processContents="lax" minOccurs="0" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="ImageRepresentation2DBlockType">
        <xs:sequence>
            <xs:element name="representationData2D" type="xs:base64Binary" />
            <xs:element name="imageInformation2DBlock" type="ImageInformation2DBlockType" />
            <xs:element name="captureDevice2DBlock" type="CaptureDevice2DBlockType" minOccurs="0" />
            <xs:any namespace="##other" processContents="lax" minOccurs="0" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="CaptureDevice2DBlockType">
        <xs:sequence>
            <xs:element name="captureDeviceSpectral2DBlock" type="CaptureDeviceSpectral2DBlockType" minOccurs="0" />
            <xs:element name="captureDeviceTechnologyId2D" type="CaptureDeviceTechnologyId2DType" minOccurs="0" />
            <xs:any namespace="##other" processContents="lax" minOccurs="0" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="CaptureDeviceSpectral2DBlockType">
        <xs:sequence>
            <xs:element name="whiteLight" type="xs:boolean" minOccurs="0" />
            <xs:element name="nearInfrared" type="xs:boolean" minOccurs="0" />
            <xs:element name="thermal" type="xs:boolean" minOccurs="0" />
            <xs:any namespace="##other" processContents="lax" minOccurs="0" />

        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="CaptureDeviceTechnologyId2DCodeType">
        <xs:choice>
            <xs:element name="unknown" type="xs:int" fixed="0" />
            <xs:element name="staticPhotographFromUnknownSource" type="xs:int" fixed="1" />
            <xs:element name="staticPhotographFromDigitalStillImageCamera" type="xs:int" fixed="2" />
            <xs:element name="staticPhotographFromScanner" type="xs:int" fixed="3" />
            <xs:element name="videoFrameFromUnknownSource" type="xs:int" fixed="4" />
            <xs:element name="videoFrameFromAnalogueVideoCamera" type="xs:int" fixed="5" />
            <xs:element name="videoFrameFromDigitalVideoCamera" type="xs:int" fixed="6" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="CaptureDeviceTechnologyId2DExtensionBlockType">
        <xs:sequence>
            <xs:element name="fallback" type="CaptureDeviceTechnologyId2DCodeType" />
            <xs:any namespace="##other" processContents="lax" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="CaptureDeviceTechnologyId2DType">
        <xs:choice>
            <xs:element name="code" type="CaptureDeviceTechnologyId2DCodeType" />
            <xs:element name="extensionBlock" type="CaptureDeviceTechnologyId2DExtensionBlockType" />
        </xs:choice>
    </xs:complexType>
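    <!-- Informative example (non-normative): a CaptureDeviceTechnologyId2DType instance
         using the "code" alternative to record capture by a digital still image camera
         (fixed value 2). The enclosing element name matches the use of this type in
         CaptureDevice2DBlockType.
    <captureDeviceTechnologyId2D>
        <code>
            <staticPhotographFromDigitalStillImageCamera>2</staticPhotographFromDigitalStillImageCamera>
        </code>
    </captureDeviceTechnologyId2D>
    -->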
    <xs:complexType name="ImageInformation2DBlockType">
        <xs:sequence>
            <xs:element name="imageDataFormat" type="ImageDataFormatType" />
            <xs:element name="faceImageKind2D" type="FaceImageKind2DType" minOccurs="0" />
            <xs:element name="postAcquisitionProcessingBlock" type="PostAcquisitionProcessingBlockType" minOccurs="0" />
            <xs:element name="lossyTransformationAttempts" type="LossyTransformationAttemptsType" minOccurs="0" />
            <xs:element name="cameraToSubjectDistance" type="CameraToSubjectDistanceType" minOccurs="0" />
            <xs:element name="sensorDiagonal" type="SensorDiagonalType" minOccurs="0" />
            <xs:element name="lensFocalLength" type="LensFocalLengthType" minOccurs="0" />
            <xs:element name="imageSizeBlock" type="ImageSizeBlockType" minOccurs="0" />
            <xs:element name="imageFaceMeasurementsBlock" type="ImageFaceMeasurementsBlockType" minOccurs="0" />
            <xs:element name="imageColourSpace" type="ImageColourSpaceType" minOccurs="0" />
            <xs:element name="referenceColourMappingBlock" type="ReferenceColourMappingBlockType" minOccurs="0" />
            <xs:any namespace="##other" processContents="lax" minOccurs="0" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="FaceImageKind2DCodeType">
        <xs:choice>
            <xs:element name="mrtd" type="xs:int" fixed="0" />
            <xs:element name="generalPurpose" type="xs:int" fixed="1" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="FaceImageKind2DExtensionBlockType">
        <xs:sequence>
            <xs:element name="fallback" type="FaceImageKind2DCodeType" />
            <xs:any namespace="##other" processContents="lax" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="FaceImageKind2DType">
        <xs:choice>
            <xs:element name="code" type="FaceImageKind2DCodeType" />
            <xs:element name="extensionBlock" type="FaceImageKind2DExtensionBlockType" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="PostAcquisitionProcessingBlockType">
        <xs:sequence>
            <xs:element name="rotated" type="xs:boolean" minOccurs="0" />
            <xs:element name="cropped" type="xs:boolean" minOccurs="0" />
            <xs:element name="downSampled" type="xs:boolean" minOccurs="0" />
            <xs:element name="whiteBalanceAdjusted" type="xs:boolean" minOccurs="0" />
            <xs:element name="multiplyCompressed" type="xs:boolean" minOccurs="0" />
            <xs:element name="interpolated" type="xs:boolean" minOccurs="0" />
            <xs:element name="contrastStretched" type="xs:boolean" minOccurs="0" />
            <xs:element name="poseCorrected" type="xs:boolean" minOccurs="0" />
            <xs:element name="multiViewImage" type="xs:boolean" minOccurs="0" />
            <xs:element name="ageProgressed" type="xs:boolean" minOccurs="0" />
            <xs:element name="superResolutionProcessed" type="xs:boolean" minOccurs="0" />
            <xs:element name="normalised" type="xs:boolean" minOccurs="0" />
            <xs:any namespace="##other" processContents="lax" minOccurs="0" />
        </xs:sequence>
    </xs:complexType>
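    <!-- Informative example (non-normative): a PostAcquisitionProcessingBlockType
         instance; all child elements are optional, so only the processing steps that
         are known need to be recorded. The values shown are illustrative.
    <postAcquisitionProcessingBlock>
        <rotated>false</rotated>
        <cropped>true</cropped>
        <downSampled>true</downSampled>
    </postAcquisitionProcessingBlock>
    -->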
    <xs:complexType name="LossyTransformationAttemptsCodeType">
        <xs:choice>
            <xs:element name="unknown" type="xs:int" fixed="0" />
            <xs:element name="zero" type="xs:int" fixed="1" />
            <xs:element name="one" type="xs:int" fixed="2" />
            <xs:element name="moreThanOne" type="xs:int" fixed="3" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="LossyTransformationAttemptsExtensionBlockType">
        <xs:sequence>
            <xs:element name="fallback" type="LossyTransformationAttemptsCodeType" />
            <xs:any namespace="##other" processContents="lax" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="LossyTransformationAttemptsType">
        <xs:choice>
            <xs:element name="code" type="LossyTransformationAttemptsCodeType" />
            <xs:element name="extensionBlock" type="LossyTransformationAttemptsExtensionBlockType" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="ImageDataFormatCodeType">
        <xs:choice>
            <xs:element name="unknown" type="xs:int" fixed="0" />
            <xs:element name="other" type="xs:int" fixed="1" />
            <xs:element name="jpeg" type="xs:int" fixed="2" />
            <xs:element name="jpeg2000Lossy" type="xs:int" fixed="3" />
            <xs:element name="jpeg2000Lossless" type="xs:int" fixed="4" />
            <xs:element name="png" type="xs:int" fixed="5" />
            <xs:element name="pgm" type="xs:int" fixed="6" />
            <xs:element name="ppm" type="xs:int" fixed="7" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="ImageDataFormatExtensionBlockType">
        <xs:sequence>
            <xs:any namespace="##other" processContents="lax" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="ImageDataFormatType">
        <xs:choice>
            <xs:element name="code" type="ImageDataFormatCodeType" />
            <xs:element name="extensionBlock" type="ImageDataFormatExtensionBlockType" />
        </xs:choice>
    </xs:complexType>
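    <!-- Informative example (non-normative): an ImageDataFormatType instance using the
         "code" alternative to indicate lossless JPEG 2000 image data (fixed value 4).
    <imageDataFormat>
        <code>
            <jpeg2000Lossless>4</jpeg2000Lossless>
        </code>
    </imageDataFormat>
    -->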

<xs:simpleType name="CameraToSubjectDistanceType">
    <xs:restriction base="xs:unsignedInt">
        <xs:minInclusive value="0" />
        <xs:maxInclusive value="50000" />
    </xs:restriction>
</xs:simpleType>
<xs:simpleType name="SensorDiagonalType">
    <xs:restriction base="xs:unsignedInt">
        <xs:minInclusive value="0" />
        <xs:maxInclusive value="2000" />
    </xs:restriction>
</xs:simpleType>
<xs:simpleType name="LensFocalLengthType">
    <xs:restriction base="xs:unsignedInt">
        <xs:minInclusive value="0" />
        <xs:maxInclusive value="2000" />
    </xs:restriction>
</xs:simpleType>
<xs:complexType name="ImageSizeBlockType">
    <xs:sequence>
        <xs:element name="width" type="ImageSizeType" />
        <xs:element name="height" type="ImageSizeType" />
    </xs:sequence>
</xs:complexType>
<xs:simpleType name="ImageSizeType">
    <xs:restriction base="xs:integer">
        <xs:minInclusive value="0" />
        <xs:maxInclusive value="65535" />
    </xs:restriction>
</xs:simpleType>
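<!-- Informative example (non-normative): an ImageSizeBlockType instance; the width and
     height values shown (413 x 531 pixels) are illustrative only.
<imageSizeBlock>
    <width>413</width>
    <height>531</height>
</imageSizeBlock>
-->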
<xs:complexType name="ImageFaceMeasurementsBlockType">
    <xs:sequence>
        <xs:element name="imageHeadWidth" type="xs:unsignedInt" minOccurs="0" />
        <xs:element name="imageInterEyeDistance" type="xs:unsignedInt" minOccurs="0" />
        <xs:element name="imageEyeToMouthDistance" type="xs:unsignedInt" minOccurs="0" />
        <xs:element name="imageHeadLength" type="xs:unsignedInt" minOccurs="0" />
        <xs:any namespace="##other" processContents="lax" minOccurs="0" />
    </xs:sequence>
</xs:complexType>
<xs:complexType name="ImageColourSpaceCodeType">
    <xs:choice>
        <xs:element name="unknown" type="xs:int" fixed="0" />
        <xs:element name="other" type="xs:int" fixed="1" />
        <xs:element name="rgb24Bit" type="xs:int" fixed="2" />
        <xs:element name="rgb48Bit" type="xs:int" fixed="3" />
        <xs:element name="yuv422" type="xs:int" fixed="4" />
        <xs:element name="greyscale8Bit" type="xs:int" fixed="5" />
        <xs:element name="greyscale16Bit" type="xs:int" fixed="6" />
    </xs:choice>
</xs:complexType>
<xs:complexType name="ImageColourSpaceExtensionBlockType">
    <xs:sequence>
        <xs:element name="fallback" type="ImageColourSpaceCodeType" />
        <xs:any namespace="##other" processContents="lax" />
    </xs:sequence>
</xs:complexType>
<xs:complexType name="ImageColourSpaceType">
    <xs:choice>
        <xs:element name="code" type="ImageColourSpaceCodeType" />
        <xs:element name="extensionBlock" type="ImageColourSpaceExtensionBlockType" />
    </xs:choice>
</xs:complexType>
    <xs:complexType name="ReferenceColourMappingBlockType">
        <xs:sequence>
                <xs:element name="referenceColourSchema" type="xs:base64Binary" minOccurs="0" />
                <xs:element name="referenceColourDefinitionAndValueBlocks" type="ReferenceColourDefinitionAndValueBlocksType" minOccurs="0" />
                <xs:any namespace="##other" processContents="lax" minOccurs="0" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="ReferenceColourDefinitionAndValueBlocksType">
        <xs:sequence>
                <xs:element name="referenceColourDefinitionAndValueBlock" type="ReferenceColourDefinitionAndValueBlockType" maxOccurs="unbounded" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="ReferenceColourDefinitionAndValueBlockType">
        <xs:sequence>
            <xs:element name="referenceColourDefinition" type="xs:base64Binary" minOccurs="0" />
            <xs:element name="referenceColourValue" type="xs:base64Binary" minOccurs="0" />
            <xs:any namespace="##other" processContents="lax" minOccurs="0" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="ShapeRepresentation3DBlockType">
        <xs:sequence>
            <xs:element name="representationData3D" type="xs:base64Binary" />
            <xs:element name="imageInformation3DBlock" type="ImageInformation3DBlockType" />
            <xs:element name="captureDevice3DBlock" type="CaptureDevice3DBlockType"
minOccurs="0" />
            <xs:any namespace="##other" processContents="lax" minOccurs="0" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="CaptureDevice3DBlockType">
        <xs:sequence>
            <xs:element name="modus3D" type="Modus3DType" minOccurs="0" />
            <xs:element name="captureDeviceTechnologyId3D" type="CaptureDeviceTechnologyId3DType" minOccurs="0" />
            <xs:any namespace="##other" processContents="lax" minOccurs="0" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="Modus3DCodeType">
        <xs:choice>
            <xs:element name="unknown" type="xs:int" fixed="0" />
            <xs:element name="active" type="xs:int" fixed="1" />
            <xs:element name="passive" type="xs:int" fixed="2" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="Modus3DExtensionBlockType">
        <xs:sequence>
            <xs:element name="fallback" type="Modus3DCodeType" />
            <xs:any namespace="##other" processContents="lax" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="Modus3DType">
        <xs:choice>
            <xs:element name="code" type="Modus3DCodeType" />
            <xs:element name="extensionBlock" type="Modus3DExtensionBlockType" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="CaptureDeviceTechnologyId3DCodeType">
        <xs:choice>
            <xs:element name="unknown" type="xs:int" fixed="0" />
            <xs:element name="stereoscopicScanner" type="xs:int" fixed="1" />
            <xs:element name="movingLaserLine" type="xs:int" fixed="2" />
            <xs:element name="structuredLight" type="xs:int" fixed="3" />

            <xs:element name="colourCodedLight" type="xs:int" fixed="4" />
            <xs:element name="timeOfFlight" type="xs:int" fixed="5" />
            <xs:element name="shapeFromShading" type="xs:int" fixed="6" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="CaptureDeviceTechnologyId3DExtensionBlockType">
        <xs:sequence>
            <xs:element name="fallback" type="CaptureDeviceTechnologyId3DCodeType" />
            <xs:any namespace="##other" processContents="lax" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="CaptureDeviceTechnologyId3DType">
        <xs:choice>
            <xs:element name="code" type="CaptureDeviceTechnologyId3DCodeType" />
            <xs:element name="extensionBlock" type="CaptureDeviceTechnologyId3DExtensionBlockType" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="ImageInformation3DBlockType">
        <xs:sequence>
            <xs:element name="representationKind3D" type="RepresentationKind3DType" />
            <xs:element name="coordinateSystem3D" type="CoordinateSystem3DType" />
            <xs:element name="cartesianScalesAndOffsets3DBlock" type="CartesianScalesAndOffsets3DBlockType" />
            <xs:element name="imageColourSpace" type="ImageColourSpaceType" minOccurs="0" />
            <xs:element name="faceImageKind3D" type="FaceImageKind3DType" minOccurs="0" />
            <xs:element name="imageSizeBlock" type="ImageSizeBlockType" minOccurs="0" />
            <xs:element name="physicalFaceMeasurements3DBlock" type="PhysicalFaceMeasurements3DBlockType" minOccurs="0" />
            <xs:element name="postAcquisitionProcessingBlock" type="PostAcquisitionProcessingBlockType" minOccurs="0" />
            <xs:element name="texturedImageResolution3DBlock" type="TexturedImageResolution3DBlockType" minOccurs="0" />
            <xs:element name="textureMap3DBlock" type="TextureMap3DBlockType" minOccurs="0" />
            <xs:any namespace="##other" processContents="lax" minOccurs="0" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="RepresentationKind3DBaseType">
        <xs:choice>
            <xs:element name="vertex3DBlock" type="Vertex3DBlockType" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="RepresentationKind3DExtensionBlockType">
        <xs:sequence>
            <xs:any namespace="##other" processContents="lax"/>
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="RepresentationKind3DType">
        <xs:choice>
            <xs:element name="base" type="RepresentationKind3DBaseType"/>
            <xs:element name="extensionBlock" type="RepresentationKind3DExtensionBlockType"/>
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="Vertex3DBlockType">
        <xs:sequence>
            <xs:element name="vertexInformation3DBlocks" type="VertexInformation3DBlocksType"
minOccurs="0" />
            <xs:element name="vertexTriangleData3DBlocks" type="VertexTriangleData3DBlocksType"
minOccurs="0" />
            <xs:any namespace="##other" processContents="lax" minOccurs="0" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="VertexInformation3DBlocksType">
        <xs:sequence>
            <xs:element name="vertexInformation3DBlock" type="VertexInformation3DBlockType"
maxOccurs="unbounded" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="VertexInformation3DBlockType">
        <xs:sequence>
            <xs:element name="vertexCoordinates3DBlock" type="cmn:CoordinateCartesian3DUnsignedShortBlockType" />
            <xs:element name="vertexId3D" type="xs:unsignedInt" minOccurs="0" />
            <xs:element name="vertexNormals3DBlock" type="cmn:CoordinateCartesian3DUnsignedShortBlockType" minOccurs="0" />
            <xs:element name="vertexTextures3DBlock" type="cmn:CoordinateCartesian2DUnsignedShortBlockType" minOccurs="0" />
            <xs:element name="errorMap3D" type="xs:base64Binary" minOccurs="0" />
            <xs:any namespace="##other" processContents="lax" minOccurs="0" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="VertexTriangleData3DBlocksType">
        <xs:sequence>
            <xs:element name="vertexTriangleData3DBlock" type="VertexTriangleData3DBlockType"
maxOccurs="unbounded" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="VertexTriangleData3DBlockType">
        <xs:sequence>
            <xs:element name="triangleIndex1" type="xs:unsignedInt" />
            <xs:element name="triangleIndex2" type="xs:unsignedInt" />
            <xs:element name="triangleIndex3" type="xs:unsignedInt" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="CoordinateSystem3DCodeType">
        <xs:choice>
            <xs:element name="cartesianCoordinateSystem3D" type="xs:int" fixed="0" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="CoordinateSystem3DExtensionBlockType">
        <xs:sequence>
            <xs:element name="fallback" type="CoordinateSystem3DCodeType" />
            <xs:any namespace="##other" processContents="lax" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="CoordinateSystem3DType">
        <xs:choice>
            <xs:element name="code" type="CoordinateSystem3DCodeType" />
            <xs:element name="extensionBlock" type="CoordinateSystem3DExtensionBlockType" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="CartesianScalesAndOffsets3DBlockType">
        <xs:sequence>
            <xs:element name="scaleX" type="xs:decimal" />
            <xs:element name="scaleY" type="xs:decimal" />
            <xs:element name="scaleZ" type="xs:decimal" />
            <xs:element name="offsetX" type="xs:decimal" />
            <xs:element name="offsetY" type="xs:decimal" />
            <xs:element name="offsetZ" type="xs:decimal" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="FaceImageKind3DCodeType">
        <xs:choice>
            <xs:element name="texturedFaceImage3d" type="xs:int" fixed="0" />
        </xs:choice>
    </xs:complexType>

<xs:complexType name="FaceImageKind3DExtensionBlockType">
    <xs:sequence>
        <xs:element name="fallback" type="FaceImageKind3DCodeType" />
        <xs:any namespace="##other" processContents="lax" />
    </xs:sequence>
</xs:complexType>
<xs:complexType name="FaceImageKind3DType">
    <xs:choice>
        <xs:element name="code" type="FaceImageKind3DCodeType" />
        <xs:element name="extensionBlock" type="FaceImageKind3DExtensionBlockType" />
    </xs:choice>
</xs:complexType>
<xs:complexType name="PhysicalFaceMeasurements3DBlockType">
    <xs:sequence>
        <xs:element name="physicalHeadWidth3D" type="xs:int" minOccurs="0" />
        <xs:element name="physicalInterEyeDistance3D" type="xs:int" minOccurs="0" />
        <xs:element name="physicalEyeToMouthDistance3D" type="xs:int" minOccurs="0" />
        <xs:element name="physicalHeadLength3D" type="xs:int" minOccurs="0" />
        <xs:any namespace="##other" processContents="lax" minOccurs="0" />
    </xs:sequence>
</xs:complexType>
<xs:complexType name="TexturedImageResolution3DBlockType">
    <xs:sequence>
        <xs:element name="mMShapeXResolution3D" type="xs:decimal" minOccurs="0" />
        <xs:element name="mMShapeYResolution3D" type="xs:decimal" minOccurs="0" />
        <xs:element name="mMShapeZResolution3D" type="xs:decimal" minOccurs="0" />
        <xs:element name="mMTextureResolution3D" type="xs:decimal" minOccurs="0" />
        <xs:element name="textureAcquisitionPeriod3D" type="xs:decimal" minOccurs="0" />
        <xs:element name="faceAreaScanned3DBlock" type="FaceAreaScanned3DBlockType" minOccurs="0" />
        <xs:any namespace="##other" processContents="lax" minOccurs="0" />
    </xs:sequence>
</xs:complexType>
    <xs:complexType name="FaceAreaScanned3DBlockType">
        <xs:sequence>
            <xs:element name="frontOfTheHead" type="xs:boolean" minOccurs="0" />
            <xs:element name="chin" type="xs:boolean" minOccurs="0" />
            <xs:element name="ears" type="xs:boolean" minOccurs="0" />
            <xs:element name="neck" type="xs:boolean" minOccurs="0" />
            <xs:element name="backOfTheHead" type="xs:boolean" minOccurs="0" />
            <xs:element name="fullHead" type="xs:boolean" minOccurs="0" />
            <xs:any namespace="##other" processContents="lax" minOccurs="0" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="TextureMap3DBlockType">
        <xs:sequence>
            <xs:element name="textureMapData3D" type="xs:base64Binary" />
            <xs:element name="imageDataFormat" type="ImageDataFormatType" />
            <xs:element name="textureCaptureDeviceSpectral3D" type="TextureCaptureDeviceSpectral3DType" minOccurs="0" />
            <xs:element name="textureStandardIlluminant3D" type="TextureStandardIlluminant3DType" minOccurs="0" />
            <xs:element name="errorMap3D" type="xs:base64Binary" minOccurs="0" />
            <xs:any namespace="##other" processContents="lax" minOccurs="0" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="TextureCaptureDeviceSpectral3DCodeType">
        <xs:choice>
            <xs:element name="unknown" type="xs:int" fixed="0" />
            <xs:element name="other" type="xs:int" fixed="1" />
            <xs:element name="white" type="xs:int" fixed="2" />
            <xs:element name="veryNearInfrared" type="xs:int" fixed="3" />
            <xs:element name="shortWaveInfrared" type="xs:int" fixed="4" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="TextureCaptureDeviceSpectral3DExtensionBlockType">
        <xs:sequence>
            <xs:element name="fallback" type="TextureCaptureDeviceSpectral3DCodeType" />
            <xs:any namespace="##other" processContents="lax" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="TextureCaptureDeviceSpectral3DType">
        <xs:choice>
            <xs:element name="code" type="TextureCaptureDeviceSpectral3DCodeType" />
            <xs:element name="extensionBlock" type="TextureCaptureDeviceSpectral3DExtensionBlockType" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="TextureStandardIlluminant3DCodeType">
        <xs:choice>
            <xs:element name="d30" type="xs:int" fixed="0" />
            <xs:element name="d35" type="xs:int" fixed="1" />
            <xs:element name="d40" type="xs:int" fixed="2" />
            <xs:element name="d45" type="xs:int" fixed="3" />
            <xs:element name="d50" type="xs:int" fixed="4" />
            <xs:element name="d55" type="xs:int" fixed="5" />
            <xs:element name="d60" type="xs:int" fixed="6" />
            <xs:element name="d65" type="xs:int" fixed="7" />
            <xs:element name="d70" type="xs:int" fixed="8" />
            <xs:element name="d75" type="xs:int" fixed="9" />
            <xs:element name="d80" type="xs:int" fixed="10" />
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="TextureStandardIlluminant3DExtensionBlockType">
        <xs:sequence>
            <xs:element name="fallback" type="TextureStandardIlluminant3DCodeType" />
            <xs:any namespace="##other" processContents="lax" />
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="TextureStandardIlluminant3DType">
        <xs:choice>
            <xs:element name="code" type="TextureStandardIlluminant3DCodeType" />
            <xs:element name="extensionBlock" type="TextureStandardIlluminant3DExtensionBlockType" />
        </xs:choice>
    </xs:complexType>
</xs:schema>
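An editor's illustration of how the 3D coordinate types above fit together (not part of the standard's text): `VertexInformation3DBlockType` stores vertex coordinates as unsigned-short Cartesian blocks, and `CartesianScalesAndOffsets3DBlockType` carries six decimals per record. The sketch below assumes the usual affine interpretation, physical = offset + scale × stored, applied per axis; that mapping, the class name, and the function are illustrative assumptions, not definitions taken from this document.

```python
# Editor's sketch only -- not part of ISO/IEC 39794-5. Assumption: the six
# xs:decimal fields of CartesianScalesAndOffsets3DBlockType map a stored
# unsigned-short vertex coordinate to a physical one as
#   physical = offset + scale * stored   (per axis).
from dataclasses import dataclass

@dataclass
class ScalesAndOffsets3D:
    """Mirrors the six fields of CartesianScalesAndOffsets3DBlockType."""
    scale_x: float
    scale_y: float
    scale_z: float
    offset_x: float
    offset_y: float
    offset_z: float

def to_physical(block, stored):
    """Map a stored vertex (one unsigned short per axis) to physical coordinates."""
    for v in stored:
        # vertexCoordinates3DBlock uses unsigned-short Cartesian blocks
        if not 0 <= v <= 0xFFFF:
            raise ValueError("stored coordinate out of unsigned-short range")
    x, y, z = stored
    return (block.offset_x + block.scale_x * x,
            block.offset_y + block.scale_y * y,
            block.offset_z + block.scale_z * z)

sc = ScalesAndOffsets3D(0.01, 0.01, 0.01, -100.0, -100.0, 0.0)
print(to_physical(sc, (20000, 10000, 5000)))  # → (100.0, 0.0, 50.0)
```

With hypothetical scales of 0.01 mm per count, the full unsigned-short range then spans roughly 655 mm per axis, which motivates storing scales and offsets per record rather than fixing a global resolution.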

Annex B (informative)

Encoding examples

B.1 Binary encoding example

A binary TLV encoding example based on the ASN.1 schema in Annex A.1 is given below. This example encoding is available at http://standards.iso.org/iso-iec/39794/-5/ed-1/en.
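The example encoding is an annotated BER-TLV listing. As an editor's sketch (this helper is not defined by the standard), the tag-and-length header of such a TLV record can be decoded as follows, using the standard BER short-form, long-form, and high-tag-number rules:

```python
# Editor's sketch only -- a minimal BER tag-and-length decoder for reading
# an annotated TLV dump; it is not part of ISO/IEC 39794-5.

def parse_tlv_header(buf: bytes, pos: int = 0):
    """Return (tag_bytes, value_length, value_offset) for the TLV at pos."""
    start = pos
    first = buf[pos]
    pos += 1
    if first & 0x1F == 0x1F:        # high-tag-number form: more tag bytes follow
        while buf[pos] & 0x80:      # continue while the continuation bit is set
            pos += 1
        pos += 1
    tag = buf[start:pos]
    length_byte = buf[pos]
    pos += 1
    if length_byte & 0x80 == 0:     # short form: length fits in 7 bits
        length = length_byte
    else:                           # long form: next N bytes hold the length
        n = length_byte & 0x7F
        length = int.from_bytes(buf[pos:pos + n], "big")
        pos += n
    return tag, length, pos

# "A0 03 80 01 02" appears in the dump: a constructed [0] wrapping a
# one-byte primitive [0] whose content octet is 02.
data = bytes.fromhex("A003800102")
tag, length, off = parse_tlv_header(data)
print(tag.hex(), length, off)  # → a0 3 2
```

Walking the dump is then a matter of recursing into constructed tags (first tag byte with bit 0x20 set) and skipping `length` content octets for primitive ones.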
[The annotated hexadecimal TLV dump is not legibly recoverable from this snapshot and is omitted here. The record embeds a JPEG (JFIF) image; the original listing abbreviates the image data with "[ Another 893429 bytes skipped ]". The complete example encoding can be obtained from the URL above.]
8E C3> | | | | | | | | | | | | | | | | | | 893759 | | 3: | [5] 06 8E C3 | | | | | | | | | | | | | | | <A7 0B A1 0930078001 2A 8102 12 67> | | | | | | | | | | | | | | | | | | 893764 | 11: | | | [7] | | | | | | | | | | | | | | <A1 09 <br> 30 <br> 078001 <br> 2A <br> 81 <br> 02 <br> 12 <br> 67> | | | | | | | | | | | | | | | | | | 893766 | | 9: | | [ | | | | | | | | | | | | | | <30 07 | 80 | 01 | 81 | 02 | 67> | | | | | | | | | | | | | 893768 | | ![](https://cdn.mathpix.com/cropped/2025_07_28_53d2c1831ad688746c66g-090.jpg?height=108&width=139&top_left_y=5976&top_left_x=882) | | ![](https://cdn.mathpix.com/cropped/2025_07_28_53d2c1831ad688746c66g-090.jpg?height=131&width=192&top_left_y=5976&top_left_x=1714) | SEQUENCE { | | | | | | | | | | | | | <80 01 | 2A> | | | | | | | | | | | | | | | | | 893770 | ![](https://cdn.mathpix.com/cropped/2025_07_28_53d2c1831ad688746c66g-090.jpg?height=138&width=230&top_left_y=6236&top_left_x=791) | | [0] <br> 2A <br> [0] 2A | | | | | | | | | | | | | | | $<8102$ | 12 | 67> | [1] 1267 | | | | | | | | | | | | | | | 893773 | | 2 | [1] 1267 | | | | | | | | | | | | | | | : <br> . 
<br> } | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | : | | } | | | | | | | | | | | | | | <A8 2E | A0 | 03 | 01 | 02 | 80 | 01 | 07 | A2 | 03 | 80 | 01 | 05 | 02 | 00 | A4 | 82> | | | | 46: | | [8] | | | | | | | | | | | | | | <A0 03 | | 01 | | | | | | | | | | | | | | | | 893779 | | 3: | | [0] { | | | | | | | | | | | | | | <80 01 02> | | | | | | | | | | | | | | | | | | 893781 | $\qquad$ | 1: | [0] 02 | | | | | | | | | | | | | | | | | : | } | | | | | | | | | | | | | | | <A1 038001 07> | | | | | | | | | | | | | | | | | | 893784 | | 3: | [1] { | | | | | | | | | | | | | | | <80 01 07> | | | | | | | | | | | | | | | | | | 893786 | | 1 | [0] 07 | | | | | | | | | | | | | | | | | : | | | | | | | | | | | | | | | | <A2 03 | 80 | 01 | | | | | | | | | | | | | | | | 893789 | | 3 | | [2] { | | | | | | | | | | | | | | <80 01 05> | | | | | | | | | | | | | | | | | | 893791 | | 1 | | | [0] 05 | | | | | | | | | | | | | | | : | | | | | | | | | | | | | | | | $<8302$ 893794 | 00 | B4> | | | | | | | | | | | | | | | | | | 2: | | [3] | 00 | B4 | | | | | | | | | | | | <A4 03 | | | | | | | | | | | | | | | | | | 893798 | | 3: | | [4] { | | | | | | | | | | | | | | <82 01 | 00> | | | | | | | | | | | | | | | | | | | 1: | | | 00 | | | | | | | | | | | | | | | : | | | | | | | | | | | | | | | | | 81 | 01 | | | | | | | | | | | | | | | | <A5 03 893803 <81 01 893805 | | 3: | | [5] | FF | | | | | | | | | | | | | | FE> | | | | | | | | | | | | | | | | | | 1 <br> 1: | | | | | | | | | | | | | ![](https://cdn.mathpix.com/cropped/2025_07_28_53d2c1831ad688746c66g-090.jpg?height=123&width=4114&top_left_y=10783&top_left_x=1519) | | |
}
<A6 0F A0 03 80 01 00 A 03 80 01 0F A2 03 80 1E>
893808 15: [6] {
<A0 03 80 01 00>  《ISO IEC 39794-5-2019》
893810 3: [0] {
< 8001 < 8001 < 8001<8001 00>
893812 1: [0] 00
}
<A1 03  《ISO IEC 39794-5-2019》 80 01 0F>
893815 3: [1] {
<80 01 OF>
893817 1: [0] 0F
}
<A2 03 80 01 1E>
893820 3: qquad\qquad [2] {
< 8001 < 8001 < 8001<8001 1E>
09 [0] 1E }
}
-
A0
09
A1
07
A0
05
A0
03
80
01
15
A1
0C
A0
0A
A0
08
80>
[9]
{
A0 09 A1 07 A0 05 A0 03 80 01 15 A1 0C A0 0A A0 08 80> [9] {| A0 | | :--- | | 09 | | A1 | | 07 | | A0 | | 05 | | A0 | | 03 | | 80 | | 01 | | 15 | | A1 | | 0C | | A0 | | 0A | | A0 | | 08 | | 80> | | [9] | | { |
<A9 3A
<30 1B 07 A0 05 A0 03 80 01 15 0C 0A 08 02 01>
893827
<A0 0B 893829 A0 09 A1 07 A0 05 A0 03 80 01 15>
11 [0] {
<A0 09 A1 07 A0 05 A0 03 80 01 15>
9: [0] {
<A1 07 A0 05 A0 03 80 01 15 > 15 > 15 >15>
893833 7: [1] {
<A0 05 A0 03 80 01 15>
893835
[0]
{
[0] {
[0] { [0] {| [0] | | :--- | | { | | [0] { |
80 01 15>
893837 -
[0]
{ { {\{
[0] {
[0] { [0] {| [0] | | :--- | | $\{$ | | [0] { |
<80 01 15>
893839
\square
}
}
)
}
<A1 0C A0 0A A0 08 80 02 01 81 81 02 01 E0>
893842 12: [1] { { {\{
A0 08 80 02 01 81 81 02 01 E0>
893844 10: [0] {
<A0 08 80 02 01 81 81 02 01 E0>
893846 8: [0] {
< 8002 < 8002 < 8002<8002
893848 \square 2: [0] 01 81
<81 02 01 E0
893852
[1] 01 E0
}
:
} :| } | | :--- | | : |
}
}
}
<30 1B A0 0B A0 09 A1 07 SEQU A0 NCE A0 03 80 16 0C 0A 08 02 02>
<A0 OB A0 09 07 A0 05 A0 03 80 01 16>
11 [0] {
<A0 09 A1 A0 05 A0 03 80 01 16> 6>
893860 9: [0]
<A1 07  《ISO IEC 39794-5-2019》 A0 05 03 80 01 16 > 16 > 16 >16>
893862 7 [1] {
<A0 05 A0 03 80 01 16>
5: [0] {
<A0 03 80 01
893866 3: [0] {
<80 01 16 > 16 > 16 >16>
893868 1: [0] 16
}
}
} <A6 0F A0 03 80 01 00 A 03 80 01 0F A2 03 80 1E> 893808 15: [6] { <A0 03 80 01 00> 893810 3: [0] { < 8001 00> 893812 1: [0] 00 } <A1 03 80 01 0F> 893815 3: [1] { <80 01 OF> 893817 1: [0] 0F } <A2 03 80 01 1E> 893820 3: qquad [2] { < 8001 1E> 09 [0] 1E } } - "A0 09 A1 07 A0 05 A0 03 80 01 15 A1 0C A0 0A A0 08 80> [9] {" <A9 3A <30 1B 07 A0 05 A0 03 80 01 15 0C 0A 08 02 01> 893827 <A0 0B 893829 A0 09 A1 07 A0 05 A0 03 80 01 15> 11 [0] { <A0 09 A1 07 A0 05 A0 03 80 01 15> 9: [0] { <A1 07 A0 05 A0 03 80 01 15 > 893833 7: [1] { <A0 05 A0 03 80 01 15> 893835 "[0] { [0] {" 80 01 15> 893837 - "[0] { [0] {" <80 01 15> 893839 https://cdn.mathpix.com/cropped/2025_07_28_53d2c1831ad688746c66g-091.jpg?height=107&width=70&top_left_y=5988&top_left_x=1611 ◻ https://cdn.mathpix.com/cropped/2025_07_28_53d2c1831ad688746c66g-091.jpg?height=139&width=146&top_left_y=6117&top_left_x=3032 } } https://cdn.mathpix.com/cropped/2025_07_28_53d2c1831ad688746c66g-091.jpg?height=123&width=39&top_left_y=6377&top_left_x=2413 https://cdn.mathpix.com/cropped/2025_07_28_53d2c1831ad688746c66g-091.jpg?height=139&width=161&top_left_y=6377&top_left_x=2604 https://cdn.mathpix.com/cropped/2025_07_28_53d2c1831ad688746c66g-091.jpg?height=222&width=154&top_left_y=6301&top_left_x=2978 ) } <A1 0C A0 0A A0 08 80 02 01 81 81 02 01 E0> 893842 12: [1] { A0 08 80 02 01 81 81 02 01 E0> 893844 10: [0] { <A0 08 80 02 01 81 81 02 01 E0> 893846 8: [0] { < 8002 893848 ◻ 2: [0] 01 81 <81 02 01 E0 893852 https://cdn.mathpix.com/cropped/2025_07_28_53d2c1831ad688746c66g-091.jpg?height=123&width=77&top_left_y=8035&top_left_x=1611 [1] 01 E0 "} :" } } } <30 1B A0 0B A0 09 A1 07 SEQU A0 NCE A0 03 80 16 0C 0A 08 02 02> <A0 OB A0 09 07 A0 05 A0 03 80 01 16> 11 [0] { <A0 09 A1 A0 05 A0 03 80 01 16> 6> 893860 9: [0] <A1 07 A0 05 03 80 01 16 > 893862 7 [1] { <A0 05 A0 03 80 01 16> 5: [0] { <A0 03 80 01 893866 3: [0] { <80 01 16 > 893868 1: [0] 16 } } | } | | | | | | | | | | | | | | | | | | | | :--- | :--- | :--- | :--- | :--- | :--- | 
:--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | | <A6 0F | A0 | 03 | 80 | 01 | 00 | A | 03 | 80 | 01 | 0F | A2 | 03 | 80 | 1E> | | | | | | 893808 | | 15: | | | | | [6] | { | | | | | | | | | | | | <A0 03 | 80 | 01 00> | | | | | | | | | | | | | | | | | | 893810 | | 3: | | | | | | [0] { | | | | | | | | | | | | $<8001$ | 00> | | | | | | | | | | | | | | | | | | | 893812 | | 1: | | | | | | [0] | 00 | | | | | | | | | | | | | | | | | | | } | | | | | | | | | | | | <A1 03 | 80 | 01 0F> | | | | | | | | | | | | | | | | | | 893815 | | 3: | | | | | | [1] { | | | | | | | | | | | | <80 01 | OF> | | | | | | | | | | | | | | | | | | | 893817 | | 1: | | | | | | [0] | 0F | | | | | | | | | | | | | | | | | | | } | | | | | | | | | | | | <A2 03 | 80 | 01 1E> | | | | | | | | | | | | | | | | | | 893820 | | 3: | $\qquad$ | | | | | [2] { | | | | | | | | | | | | $<8001$ | 1E> | | | | | | | | | | | | | | | | | | | | 09 | | | | | | | [0] 1E } | | | | | | | | | | | | } | | | | | | | | | | | | | | | | | | | | - | | | | | | | | | | | | | | | | | | | | | | | | | | A0 <br> 09 <br> A1 <br> 07 <br> A0 <br> 05 <br> A0 <br> 03 <br> 80 <br> 01 <br> 15 <br> A1 <br> 0C <br> A0 <br> 0A <br> A0 <br> 08 <br> 80> <br> [9] <br> { | | | | | | | | | | | | | | | | | | | <A9 3A | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | <30 1B | | | | | | 07 | A0 | 05 | A0 | 03 | 80 | 01 | 15 | 0C | 0A | 08 | 02 | 01> | | 893827 | | | | | | | | | | | | | | | | | | | | <A0 0B 893829 | | | | | | A0 | 09 | A1 | 07 | A0 | 05 | A0 | 03 | 80 | 01 | 15> | | | | | | | | | | | | | | | | 11 | | | | | [0] | { | | | | | | | | | | | | <A0 09 | A1 | 07 | A0 | 05 | A0 | 03 | 80 | 01 | 15> | | | | | | | | | | | | | 9: | | | | | | [0] | { | | | | | | | | | | | <A1 07 | A0 | 05 | A0 | 03 | 80 | 01 | $15>$ | | | | | | | | | | | | | 893833 | | 7: | | | | | | | [1] { | | | | | | | | | | | <A0 05 | A0 | 03 | 80 | 01 | 15> | | | | | | | | | | | | | | | 
893835 | | | | | | | [0] <br> { <br> [0] { | | | | | | | | | | | | | | 80 | 01 15> | | | | | | | | | | | | | | | | | | 893837 | | - | | | | | | | [0] <br> $\{$ <br> [0] { | | | | | | | | | | | <80 01 15> | | | | | | | | | | | | | | | | | | | | 893839 | | | ![](https://cdn.mathpix.com/cropped/2025_07_28_53d2c1831ad688746c66g-091.jpg?height=107&width=70&top_left_y=5988&top_left_x=1611) | | | | | | | | | | | | | | | | | | | | $\square$ | | | | | | | ![](https://cdn.mathpix.com/cropped/2025_07_28_53d2c1831ad688746c66g-091.jpg?height=139&width=146&top_left_y=6117&top_left_x=3032) | } | | | | | | | | | | | | | | | | | | | } | | | | | | | | | | | | | | | | | ![](https://cdn.mathpix.com/cropped/2025_07_28_53d2c1831ad688746c66g-091.jpg?height=123&width=39&top_left_y=6377&top_left_x=2413) | ![](https://cdn.mathpix.com/cropped/2025_07_28_53d2c1831ad688746c66g-091.jpg?height=139&width=161&top_left_y=6377&top_left_x=2604) | ![](https://cdn.mathpix.com/cropped/2025_07_28_53d2c1831ad688746c66g-091.jpg?height=222&width=154&top_left_y=6301&top_left_x=2978) | | | | | | | | | | | | | | | ) | | | | | | | | | | | | | | | | | | | | } | | | | | | | | | | | | | | | | <A1 0C | A0 | 0A A0 | 08 | | 80 | 02 | 01 | 81 | 81 | 02 | 01 | E0> | | | | | | | | 893842 | | 12: | | | | | [1] | $\{$ | | | | | | | | | | | | | A0 | 08 | 80 | 02 | 01 | 81 | 81 | 02 | 01 | E0> | | | | | | | | | | 893844 | | 10: | | | | | | [0] | { | | | | | | | | | | | <A0 08 | 80 | 02 | 01 | 81 | 81 | 02 | 01 | E0> | | | | | | | | | | | | 893846 | | 8: | | | | | | | [0] | { | | | | | | | | | | $<8002$ | | | | | | | | | | | | | | | | | | | | 893848 | $\square$ | 2: | | | | | | | [0] | | 01 | 81 | | | | | | | | <81 02 | 01 | E0 | | | | | | | | | | | | | | | | | | 893852 | | | ![](https://cdn.mathpix.com/cropped/2025_07_28_53d2c1831ad688746c66g-091.jpg?height=123&width=77&top_left_y=8035&top_left_x=1611) | | [1] 01 E0 | | | | | | | | | | | | | | | | | | } <br> : | | | | | | | | | | | | | | | | | | | | | | | | } | | | | | | | 
| | | | | | | | | | | | | } | | | | | | | | | | | | | | | | | | | | } | | | | | | | | | | | | | | | <30 1B | A0 | 0B | A0 | 09 | A1 | 07 | SEQU | A0 | NCE | A0 | 03 | 80 | 16 | 0C | 0A | 08 | 02 | 02> | | <A0 OB | A0 | 09 | 07 | | A0 | 05 | A0 | 03 | 80 | 01 | 16> | | | | | | | | | | | 11 | | | | | [0] | { | | | | | | | | | | | | <A0 09 | A1 | | A0 | 05 | A0 | 03 | 80 | 01 | 16> | 6> | | | | | | | | | | 893860 | | 9: | | | | | | [0] | | | | | | | | | | | | <A1 07 | A0 | 05 | 03 | | 80 | 01 | $16>$ | | | | | | | | | | | | | 893862 | | 7 | | | | | | | [1] | { | | | | | | | | | | <A0 05 | A0 | 03 | 80 | 01 | 16> | | | | | | | | | | | | | | | | | 5: | | | | | | | | [0] { | | | | | | | | | | <A0 03 | 80 | 01 | | | | | | | | | | | | | | | | | | 893866 | | 3: | | | | | | | | [0] { | | | | | | | | | | <80 01 | $16>$ | | | | | | | | | | | | | | | | | | | 893868 | | 1: | | | | | | | | | [0] | 16 | | | | | | | | | | | | | | | | | | } | | | | | | | | | | | | | | | | | | | | } | | | | | | | | |
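The Annex B example is a BER tag-length-value (TLV) listing. As an illustrative sketch (not part of the standard), the following Python snippet decodes single BER TLV elements such as the fragments `80 01 2A` and `81 02 12 67` that survive in the dump; note that 0x2A is 42 and 0x1267 is 4711, the same organization and algorithm identifier values used in the XML example in B.2.

```python
def parse_tlv(data, offset=0):
    """Decode one BER TLV element; return (tag_bytes, value_bytes, next_offset)."""
    tag_start = offset
    first = data[offset]
    offset += 1
    if first & 0x1F == 0x1F:          # high tag number form: more tag bytes follow
        while data[offset] & 0x80:
            offset += 1
        offset += 1
    tag = data[tag_start:offset]
    length = data[offset]
    offset += 1
    if length & 0x80:                 # long form: low 7 bits count the length bytes
        n = length & 0x7F
        length = int.from_bytes(data[offset:offset + n], "big")
        offset += n
    return tag, data[offset:offset + length], offset + length

# Fragments legible in the dump above:
tag, value, _ = parse_tlv(bytes.fromhex("80012A"))      # context tag [0], value 2A
print(tag.hex(), int.from_bytes(value, "big"))          # 80 42
tag, value, _ = parse_tlv(bytes.fromhex("81021267"))    # context tag [1], value 12 67
print(tag.hex(), int.from_bytes(value, "big"))          # 81 4711
```

A full decoder would additionally recurse into constructed tags (bit 0x20 of the first tag byte), which is how the nested { } structure shown in the dump is produced.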

ISO/IEC 39794-5:2019(E)

B.2 Example encodings for a face image XML document

An encoding example based on the XSD schema in Annex A.2 is given below. This example encoding is available at http://standards.iso.org/iso-iec/39794/-5/ed-1/en.
NOTE The example available at http://standards.iso.org/iso-iec/39794/-5/ed-1/en contains the full 2D face JPEG image data encoded in the base64Binary value field of the "representationData2D" element. For simplicity, the value is truncated below, with its full length given in brackets.
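As background to the NOTE: base64Binary is ordinary Base64 (RFC 4648). The sketch below is illustrative only, not taken from the standard; it encodes the first ten bytes of a JPEG/JFIF stream, which yields exactly the `/9j/4AAQ...` prefix visible in the truncated representationData2D value.

```python
import base64

# The first ten bytes of a JFIF/JPEG stream; a real representationData2D
# value would encode the complete image (191412 bytes in the example).
jpeg_prefix = bytes.fromhex("FFD8FFE000104A464946")
encoded = base64.b64encode(jpeg_prefix).decode("ascii")
print(encoded)  # /9j/4AAQSkZJRg==
```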
<?xml version="1.0" encoding="utf-8"?>
<fac:faceImageData xmlns:cmn="http://standards.iso.org/iso-iec/39794/-1"
xmlns:fac="http://standards.iso.org/iso-iec/39794/-5">
    <fac:versionBlock>
        <cmn:generation>3</cmn:generation>
        <cmn:year>2019</cmn:year>
    </fac:versionBlock>
    <fac:representationBlocks>
        <fac:representationBlock>
            <fac:representationId>1</fac:representationId>
            <fac:imageRepresentation>
                <fac:base>
                    <fac:imageRepresentation2DBlock>
                        <fac:representationData2D>
                            /9j/4AAQSkZJRgABAQ... (191412 bytes)
                        </fac:representationData2D>
                        <fac:imageInformation2DBlock>
                            <fac:imageDataFormat>
                                <fac:code>
                                    <fac:jpeg>2</fac:jpeg>
                                </fac:code>
                            </fac:imageDataFormat>
                            <fac:faceImageKind2D>
                                <fac:code>
                                    <fac:generalPurpose>1</fac:generalPurpose>
                                </fac:code>
                            </fac:faceImageKind2D>
                        </fac:imageInformation2DBlock>
                        <fac:captureDevice2DBlock>
                            <fac:captureDeviceTechnologyId2D>
                                <fac:code>
                                    <fac:staticPhotographFromUnknownSource>1</fac:staticPhotographFromUnknownSource>
                                </fac:code>
                            </fac:captureDeviceTechnologyId2D>
                        </fac:captureDevice2DBlock>
                    </fac:imageRepresentation2DBlock>
                </fac:base>
</fac:imageRepresentation>
<fac:captureDateTimeBlock>
    <cmn:year>2019</cmn:year>
    <cmn:month>7</cmn:month>
    <cmn:day>8</cmn:day>
</fac:captureDateTimeBlock>
<fac:qualityBlocks>
    <cmn:qualityBlock>
        <cmn:algorithmIdBlock>
            <cmn:organization>42</cmn:organization>
            <cmn:id>4711</cmn:id>
        </cmn:algorithmIdBlock>
        <cmn:scoreOrError>
            <cmn:score>50</cmn:score>
        </cmn:scoreOrError>
    </cmn:qualityBlock>
    <cmn:qualityBlock>
        <cmn:algorithmIdBlock>
            <cmn:organization>7743</cmn:organization>
            <cmn:id>1650</cmn:id>
        </cmn:algorithmIdBlock>
        <cmn:scoreOrError>
            <cmn:error>
                <cmn:code>
                    <cmn:failureToAssess>0</cmn:failureToAssess>
                </cmn:code>
            </cmn:error>
        </cmn:scoreOrError>
    </cmn:qualityBlock>
</fac:qualityBlocks>
<fac:padDataBlock>
    <cmn:decision>
        <cmn:code>
            <cmn:attack>1</cmn:attack>
        </cmn:code>
    </cmn:decision>
    <cmn:scoreBlocks>
        <cmn:scoreBlock>
            <cmn:mechanismIdBlock>
                <cmn:organization>42</cmn:organization>
                <cmn:id>4711</cmn:id>
            </cmn:mechanismIdBlock>
            <cmn:scoreOrError>
                <cmn:score>42</cmn:score>
            </cmn:scoreOrError>
        </cmn:scoreBlock>
    </cmn:scoreBlocks>
    <cmn:captureContext>
        <cmn:code>
            <cmn:enrolment>0</cmn:enrolment>
        </cmn:code>
    </cmn:captureContext>
    <cmn:supervisionLevel>
        <cmn:code>
            <cmn:unattended>4</cmn:unattended>
        </cmn:code>
    </cmn:supervisionLevel>
    <cmn:riskLevel>100</cmn:riskLevel>
    <cmn:criteriaCategory>
        <cmn:code>
            <cmn:common>2</cmn:common>
        </cmn:code>
    </cmn:criteriaCategory>
    <cmn:parameter>
        UEFEIHBhcmFtZXRlciBieXRlcw==
    </cmn:parameter>
    <cmn:challenges>
        <cmn:challenge>
            UEFEIGNoYWxsZW5nZSBieXRlcw==
        </cmn:challenge>
    </cmn:challenges>


</fac:padDataBlock>
<fac:sessionId>429763</fac:sessionId>
<fac:captureDeviceBlock>
    <fac:certificationIdBlocks>
        <cmn:certificationIdBlock>
            <cmn:organization>42</cmn:organization>
            <cmn:id>4711</cmn:id>
        </cmn:certificationIdBlock>
    </fac:certificationIdBlocks>
</fac:captureDeviceBlock>
<fac:identityMetadataBlock>
    <fac:gender>
        <fac:code>
            <fac:male>2</fac:male>
        </fac:code>
    </fac:gender>
    <fac:eyeColour>
        <fac:code>
            <fac:hazel>7</fac:hazel>
        </fac:code>
    </fac:eyeColour>
    <fac:hairColour>
        <fac:code>
            <fac:brown>5</fac:brown>
        </fac:code>
    </fac:hairColour>
    <fac:subjectHeight>180</fac:subjectHeight>
    <fac:propertiesBlock>
        <fac:beard>false</fac:beard>
    </fac:propertiesBlock>
    <fac:expressionBlock>
        <fac:smile>1</fac:smile>
    </fac:expressionBlock>
    <fac:poseAngleBlock>
        <fac:yawAngleBlock>
            <fac:angleValue>0</fac:angleValue>
        </fac:yawAngleBlock>
        <fac:pitchAngleBlock>
            <fac:angleValue>15</fac:angleValue>
        </fac:pitchAngleBlock>
        <fac:rollAngleBlock>
            <fac:angleValue>30</fac:angleValue>
        </fac:rollAngleBlock>
    </fac:poseAngleBlock>
</fac:identityMetadataBlock>
<fac:landmarkBlocks>
    <fac:landmarkBlock>
        <fac:landmarkKind>
            <fac:base>
                <fac:anthropometricLandmark>
                    <fac:base>
                        <fac:anthropometricLandmarkName>
                            <fac:code>
                                <fac:centerPointOfPupilLeft>21</fac:centerPointOfPupilLeft>
                            </fac:code>
                        </fac:anthropometricLandmarkName>
                    </fac:base>
                </fac:anthropometricLandmark>
            </fac:base>
        </fac:landmarkKind>
        <fac:landmarkCoordinates>
            <fac:base>
                <fac:coordinateCartesian2DBlock>
                    <cmn:x>385</cmn:x>
                    <cmn:y>480</cmn:y>
                </fac:coordinateCartesian2DBlock>
            </fac:base>
        </fac:landmarkCoordinates>
    </fac:landmarkBlock>
    <fac:landmarkBlock>
        <fac:landmarkKind>
            <fac:base>
                <fac:anthropometricLandmark>
                    <fac:base>
                        <fac:anthropometricLandmarkName>
                            <fac:code>
                                <fac:centerPointOfPupilRight>22</fac:centerPointOfPupilRight>
                            </fac:code>
                        </fac:anthropometricLandmarkName>
                    </fac:base>
                </fac:anthropometricLandmark>
            </fac:base>
        </fac:landmarkKind>
        <fac:landmarkCoordinates>
            <fac:base>
                <fac:coordinateCartesian2DBlock>
                    <cmn:x>640</cmn:x>
                    <cmn:y>475</cmn:y>
                </fac:coordinateCartesian2DBlock>
            </fac:base>
        </fac:landmarkCoordinates>
    </fac:landmarkBlock>
</fac:landmarkBlocks>
        </fac:representationBlock>
    </fac:representationBlocks>
</fac:faceImageData>
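The example document can be read back with any namespace-aware XML library. The following sketch is illustrative, using Python's standard xml.etree on an inlined subset of the document above; the namespace URIs and element names are taken verbatim from the example.

```python
import xml.etree.ElementTree as ET

# Namespace URIs exactly as declared in the example document above.
NS = {
    "cmn": "http://standards.iso.org/iso-iec/39794/-1",
    "fac": "http://standards.iso.org/iso-iec/39794/-5",
}

# A minimal subset of the example, inlined so the snippet is self-contained.
doc = """<?xml version="1.0" encoding="utf-8"?>
<fac:faceImageData xmlns:cmn="http://standards.iso.org/iso-iec/39794/-1"
                   xmlns:fac="http://standards.iso.org/iso-iec/39794/-5">
  <fac:versionBlock>
    <cmn:generation>3</cmn:generation>
    <cmn:year>2019</cmn:year>
  </fac:versionBlock>
</fac:faceImageData>"""

root = ET.fromstring(doc)
generation = root.findtext("fac:versionBlock/cmn:generation", namespaces=NS)
year = root.findtext("fac:versionBlock/cmn:year", namespaces=NS)
print(generation, year)  # 3 2019
```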

Annex C
(normative)

Conformance testing methodology

C.1 General

This annex specifies elements of the conformance testing methodology, test assertions, and test procedures applicable to this document. Specifically, it establishes:
  • test assertions of the structure of the face image data format as specified in this document (Type A Level 1),
  • test assertions of internal consistency, checking the types of values that may be contained within each element (Type A Level 2),
  • tests of semantic assertions (Type A Level 3).
This conformance testing methodology does not establish:
  • tests of conformance of CBEFF structures required by ISO/IEC 39794-1,
  • tests of conformance of the image data to the quality-related specifications,
  • tests of conformance of the image data blocks to the respective JPEG or JPEG 2000 standards,
  • tests of other characteristics of biometric products or other types of testing of biometric products (e.g., acceptance, performance, robustness, security).
To provide sufficient information about the IUT for the testing laboratory to properly conduct a conformance test, and for an appropriate declaration of conformity to be made, the supplier of the IUT shall provide the identification of the supplier and the IUT in Table C.1, and shall complete the IUT support and supported range columns of Table C.2 for the tested face image extensible BDB format(s). All tables shall be provided to the testing laboratory prior to, or at the same time as, the IUT itself.
NOTE W3C maintains a list of tools that can be used to work with XML documents and schemas [35]. ITU-T maintains a list of tools that can be used to work with ASN.1 documents and schemas [36]. Validating documents against the schemas will cover all Level 1 conformance issues.
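Full XSD validation requires one of the external tools referenced in the NOTE, or a schema-aware library. As a minimal illustrative precondition, a document that does not even parse cannot pass any Level 1 assertion; the sketch below uses only Python's standard library and does not replace schema validation.

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Return True if xml_text parses as XML at all; this is only the
    precondition for Level 1 testing, not a substitute for XSD validation."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

print(is_well_formed(
    "<fac:faceImageData xmlns:fac='http://standards.iso.org/iso-iec/39794/-5'/>"
))  # True
print(is_well_formed("<fac:faceImageData>"))  # False
```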
Table C.1 - Identification of the supplier and the IUT

| | |
|:---|:---|
| Supplier name and address | |
| Contact point for queries about the ICS | |
| Implementation name | |
| Implementation version | |
| Any other information necessary for full identification of the implementation | |
| Registered BDB format identifier of the format that conformance is claimed to | |
| Are any mandatory requirements of the standard not fully supported (Yes or No) | |
| Date of statement | |

C.2 Requirements and options

Table C.2 lists the syntactic options and semantic conformance requirements specified in this document. The supplier of the IUT can state which optional components are supported, and the testing laboratory can note the results of the tests. Support is defined as the ability of the used structure to fulfil the requirement automatically, without further testing. Support does not mean that the requirement cannot be fulfilled when using the other structure; all requirements in this table can be fulfilled for both ASN.1 and XML.
Table C.2 also details the Level 2 conformance tests that a testing organization should perform on an IUT. These Level 2 tests are necessary because schema validation does not perform those checks. All other Level 1 and Level 2 conformance requirements are tested by schema validation.
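A Level 2 check of the kind described above can be automated by inspecting element values after parsing. The sketch below is illustrative only: the element paths come from the XML example in Annex B.2, while the checked range (0 to 100 for quality scores) is an assumption chosen for demonstration, not a requirement quoted from this document.

```python
import xml.etree.ElementTree as ET

NS = {
    "cmn": "http://standards.iso.org/iso-iec/39794/-1",
    "fac": "http://standards.iso.org/iso-iec/39794/-5",
}

def check_quality_scores(root, lo=0, hi=100):
    """Level 2 style value check: collect quality scores outside [lo, hi].
    The range is an assumed example, not taken from the standard."""
    failures = []
    for score in root.iterfind(".//cmn:qualityBlock/cmn:scoreOrError/cmn:score", NS):
        value = int(score.text)
        if not lo <= value <= hi:
            failures.append(value)
    return failures

# Subset of the Annex B.2 example document, inlined for illustration.
doc = """<fac:faceImageData xmlns:cmn="http://standards.iso.org/iso-iec/39794/-1"
  xmlns:fac="http://standards.iso.org/iso-iec/39794/-5">
  <fac:qualityBlocks>
    <cmn:qualityBlock>
      <cmn:scoreOrError><cmn:score>50</cmn:score></cmn:scoreOrError>
    </cmn:qualityBlock>
  </fac:qualityBlocks>
</fac:faceImageData>"""

print(check_quality_scores(ET.fromstring(doc)))  # []
```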

Table C. 2 - Requirements and options of the data format specification
表 C.2 - 資料格式規範之需求與選項
Provision identifier  條款識別碼 Reference in data format specification
資料格式規範中之參照
Provision summary  條款摘要 Level  等級 Status  狀態 Format type applicability
格式類型適用性
IUT support  IUT 支援 Supported range  支援範圍 Test result  檢查結果
P1 Annex A  附錄 A A face-image data block may contain unknown extensions.
臉部影像資料區塊可能包含未知的擴充功能。
1 and 2  1 和 2 0 Y Y
P2 Annex A  附錄 A A representation block may contain a capture date/time block.
一個表示區塊可能包含擷取日期/時間區塊。
1 and 2  1 和 2 0 Y Y
P3 AnnexA  附錄 A A representation block may contain quality blocks.
一個表示區塊可能包含品質區塊。
1 and 2  1 和 2 0 Y Y
P4 ISO/IEC 39794-1 A quality block may contain unknown extensions.
一個品質區塊可能包含未知的擴展功能。
1 and 2  1 和 2 0 Y Y
P5 Annex A  附錄 A A representation block may contain a PAD data block.
一個表示區塊可能包含一個 PAD 資料區塊。
1 and 2  1 和 2 0 Y Y
P6 ISO/IEC 39794-1 A PAD data block may contain a PAD decision.
一個 PAD 資料區塊可能包含一個 PAD 決策。
1 and 2  1 和 2 0 Y Y
P7 ISO/IEC 39794-1 A PAD data block may contain PAD score blocks.
一個 PAD 資料區塊可能包含 PAD 分數區塊。
1 and 2  1 和 2 0 Y Y
P8 ISO/IEC 39794-1 A PAD data block may contain extended data blocks.
一個 PAD 資料區塊可能包含延伸資料區塊。
1 and 2  1 和 2 0 Y Y
P9 ISO/IEC 39794-1 A PAD data block may contain a context-of-capture field.
一個 PAD 資料區塊可能包含擷取情境欄位。
1 and 2  1 和 2 0 Y Y
P10 ISO/IEC 39794-1 A PAD data block may contain a level-ofsupervision/surveillance field.
PAD 資料區塊可能包含一個監督/監控等級欄位。
1 and 2  1 和 2 0 Y Y
P11 ISO/IEC 39794-1 A PAD data block may contain a risk level field.
PAD 資料區塊可能包含風險等級欄位
1 and 2  1 和 2 0 Y Y
P12 ISO/IEC 39794-1 A PAD data block may contain a category-of-criteria field.
一個 PAD 資料區塊可能包含準則類別欄位。
1 and 2  1 和 2 0 Y Y
P13 ISO/IEC 39794-1 A PAD data block may contain a PAD parameters field.
一個 PAD 資料區塊可能包含 PAD 參數欄位。
1 and 2  1 和 2 0 Y Y
P14 ISO/IEC 39794-1 A PAD data block may contain PAD challenges.
一個 PAD 資料區塊可能包含 PAD 挑戰。
1 and 2  1 和 2 0 Y Y
P15 ISO/IEC 39794-1 A PAD data block may contain a PAD capture date/time field.
PAD 資料區塊可能包含一個 PAD 擷取日期/時間欄位。
1 and 2  1 和 2 0 Y Y
P16 Annex A  附錄 A A representation block may contain a session identifier.
表示區塊可能包含一個工作階段識別碼。
1 and 2  1 和 2 0 Y Y
P17 Annex A  附錄 A A representation block may contain a derived-from identifier.
一個表示區塊可能包含一個衍生自識別碼的內容。
1 and 2  1 和 2 0 Y Y
P18 Annex A  附錄 A A representation block may contain a capture device block.
一個表示區塊可能包含一個擷取裝置區塊。
1 and 2  1 和 2 0 Y Y
P19 Annex A  附錄 A A capture device block may contain a model identifier block.
擷取裝置區塊可能包含型號識別碼區塊。
1 and 2  1 和 2 0 Y Y
P20 Annex A  附錄 A A capture device block may contain certification identifier blocks.
擷取裝置區塊可能包含認證識別碼區塊。
1 and 2  1 和 2 0 Y Y
P21 Annex A  附錄 A A capture device block may contain unknown extensions.
擷取裝置區塊可能包含未知的擴充功能。
1 and 2  1 和 2 0 Y Y
P22 Annex A  附錄 A A representation block may contain an identity metadata block.
一個表示區塊可能包含一個身份識別元資料區塊。
1 and 2  1 和 2 0 Y Y
P23 Annex A  附錄 A An identity metadata block may contain a gender field.
身分識別元數據區塊可能包含性別欄位。
1 and 2  1 和 2 0 Y Y
| Provision identifier | Reference in data format specification | Provision summary | Level | Status | Format type applicability | IUT support | Supported range | Test result |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| P1 | Annex A | A face-image data block may contain unknown extensions. | 1 and 2 | 0 | Y Y | | | |
| P2 | Annex A | A representation block may contain a capture date/time block. | 1 and 2 | 0 | Y Y | | | |
| P3 | Annex A | A representation block may contain quality blocks. | 1 and 2 | 0 | Y Y | | | |
| P4 | ISO/IEC 39794-1 | A quality block may contain unknown extensions. | 1 and 2 | 0 | Y Y | | | |
| P5 | Annex A | A representation block may contain a PAD data block. | 1 and 2 | 0 | Y Y | | | |
| P6 | ISO/IEC 39794-1 | A PAD data block may contain a PAD decision. | 1 and 2 | 0 | Y Y | | | |
| P7 | ISO/IEC 39794-1 | A PAD data block may contain PAD score blocks. | 1 and 2 | 0 | Y Y | | | |
| P8 | ISO/IEC 39794-1 | A PAD data block may contain extended data blocks. | 1 and 2 | 0 | Y Y | | | |
| P9 | ISO/IEC 39794-1 | A PAD data block may contain a context-of-capture field. | 1 and 2 | 0 | Y Y | | | |
| P10 | ISO/IEC 39794-1 | A PAD data block may contain a level-of-supervision/surveillance field. | 1 and 2 | 0 | Y Y | | | |
| P11 | ISO/IEC 39794-1 | A PAD data block may contain a risk level field. | 1 and 2 | 0 | Y Y | | | |
| P12 | ISO/IEC 39794-1 | A PAD data block may contain a category-of-criteria field. | 1 and 2 | 0 | Y Y | | | |
| P13 | ISO/IEC 39794-1 | A PAD data block may contain a PAD parameters field. | 1 and 2 | 0 | Y Y | | | |
| P14 | ISO/IEC 39794-1 | A PAD data block may contain PAD challenges. | 1 and 2 | 0 | Y Y | | | |
| P15 | ISO/IEC 39794-1 | A PAD data block may contain a PAD capture date/time field. | 1 and 2 | 0 | Y Y | | | |
| P16 | Annex A | A representation block may contain a session identifier. | 1 and 2 | 0 | Y Y | | | |
| P17 | Annex A | A representation block may contain a derived-from identifier. | 1 and 2 | 0 | Y Y | | | |
| P18 | Annex A | A representation block may contain a capture device block. | 1 and 2 | 0 | Y Y | | | |
| P19 | Annex A | A capture device block may contain a model identifier block. | 1 and 2 | 0 | Y Y | | | |
| P20 | Annex A | A capture device block may contain certification identifier blocks. | 1 and 2 | 0 | Y Y | | | |
| P21 | Annex A | A capture device block may contain unknown extensions. | 1 and 2 | 0 | Y Y | | | |
| P22 | Annex A | A representation block may contain an identity metadata block. | 1 and 2 | 0 | Y Y | | | |
| P23 | Annex A | An identity metadata block may contain a gender field. | 1 and 2 | 0 | Y Y | | | |
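Every provision above describes an optional ("may contain") sub-block of a larger block. As an illustration only — not the standard's actual ASN.1 schema, and with all names hypothetical — this optional-containment pattern can be sketched with Python dataclasses:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch of the optional-containment pattern tested by P1-P23:
# every sub-block of a representation block is optional, and list-valued
# blocks (e.g. quality blocks) default to empty.
@dataclass
class CaptureDevice:
    model_identifier: Optional[str] = None                              # cf. P19
    certification_identifiers: List[str] = field(default_factory=list)  # cf. P20

@dataclass
class Representation:
    capture_datetime: Optional[str] = None                   # cf. P2
    quality_blocks: List[dict] = field(default_factory=list) # cf. P3
    pad_data: Optional[dict] = None                          # cf. P5
    session_identifier: Optional[bytes] = None               # cf. P16
    capture_device: Optional[CaptureDevice] = None           # cf. P18

# A minimal representation legitimately omits every optional block.
rep = Representation()
assert rep.capture_device is None and rep.quality_blocks == []
```

A conformance test against such provisions checks only that an implementation can encode and decode each optional block when present, not that the block is supplied.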
Table C.2 (continued)

| Provision identifier | Reference in data format specification | Provision summary | Level | Status | Format type applicability | IUT support | Supported range | Test result |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| P24 | Annex A | An identity metadata block may contain an eye colour field. | 1 and 2 | 0 | Y Y | | | |
| P25 | Annex A | An identity metadata block may contain a hair colour field. | 1 and 2 | 0 | Y Y | | | |
| P26 | Annex A | An identity metadata block may contain a subject height field. | 1 and 2 | 0 | Y Y | | | |
| P27 | Annex A | An identity metadata block may contain a properties block. | 1 and 2 | 0 | Y Y | | | |
| P28 | Annex A | A properties block may contain a glasses field. | 1 and 2 | 0 | Y Y | | | |
| P29 | Annex A | A properties block may contain a moustache field. | 1 and 2 | 0 | Y Y | | | |
| P30 | Annex A | A properties block may contain a beard field. | 1 and 2 | 0 | Y Y | | | |
| P31 | Annex A | A properties block may contain a teeth-visible field. | 1 and 2 | 0 | Y Y | | | |
| P32 | Annex A | A properties block may contain a pupil-or-iris-not-visible field. | 1 and 2 | 0 | Y Y | | | |
| P33 | Annex A | A properties block may contain a mouth-open field. | 1 and 2 | 0 | Y Y | | | |
| P34 | Annex A | A properties block may contain a left-eye-patch field. | 1 and 2 | 0 | Y Y | | | |
| P35 | Annex A | A properties block may contain a right-eye-patch field. | 1 and 2 | 0 | Y Y | | | |
| P36 | Annex A | A properties block may contain a dark-glasses field. | 1 and 2 | 0 | Y Y | | | |
| P37 | Annex A | A properties block may contain a biometric-absent field. | 1 and 2 | 0 | Y Y | | | |
| P38 | Annex A | A properties block may contain a head-coverings-present field. | 1 and 2 | 0 | Y Y | | | |
| P39 | Annex A | A properties block may contain unknown extensions. | 1 and 2 | 0 | Y Y | | | |
| P40 | Annex A | An identity metadata block may contain an expression block. | 1 and 2 | 0 | Y Y | | | |
| P41 | Annex A | An expression block may contain a neutral field. | 1 and 2 | 0 | Y Y | | | |
| P42 | Annex A | An expression block may contain a smile field. | 1 and 2 | 0 | Y Y | | | |
| P43 | Annex A | An expression block may contain a raised-eyebrows field. | 1 and 2 | 0 | Y Y | | | |
| P44 | Annex A | An expression block may contain an eyes-looking-away-from-the-camera field. | 1 and 2 | 0 | Y Y | | | |
| P45 | Annex A | An expression block may contain a squinting field. | 1 and 2 | 0 | Y Y | | | |
| P46 | Annex A | An expression block may contain a frowning field. | 1 and 2 | 0 | Y Y | | | |
Table C.2 (continued)

| Provision identifier | Reference in data format specification | Provision summary | Level | Status | Format type applicability | IUT support | Supported range | Test result |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| P47 | Annex A | An expression block may contain unknown extensions. | 1 and 2 | 0 | Y Y | | | |
| P48 | Annex A | An identity metadata block may contain a pose angle block. | 1 and 2 | 0 | Y Y | | | |
| P49 | Annex A | A pose angle block may contain a yaw angle block. | 1 and 2 | 0 | Y Y | | | |
| P50 | Annex A | A yaw angle block may contain an angle value field. | 1 and 2 | 0 | Y Y | | | |
| P51 | Annex A | A yaw angle block may contain an angle uncertainty field. | 1 and 2 | 0 | Y Y | | | |
| P52 | Annex A | A yaw angle block may contain unknown extensions. | 1 and 2 | 0 | Y Y | | | |
| P53 | Annex A | A pose angle block may contain a pitch angle block. | 1 and 2 | 0 | Y Y | | | |
| P54 | Annex A | A pitch angle block may contain an angle value field. | 1 and 2 | 0 | Y Y | | | |
| P55 | Annex A | A pitch angle block may contain an angle uncertainty field. | 1 and 2 | 0 | Y Y | | | |
| P56 | Annex A | A pitch angle block may contain unknown extensions. | 1 and 2 | 0 | Y Y | | | |
| P57 | Annex A | A pose angle block may contain a roll angle block. | 1 and 2 | 0 | Y Y | | | |
| P58 | Annex A | A roll angle block may contain an angle value field. | 1 and 2 | 0 | Y Y | | | |
| P59 | Annex A | A roll angle block may contain an angle uncertainty field. | 1 and 2 | 0 | Y Y | | | |
| P60 | Annex A | A roll angle block may contain unknown extensions. | 1 and 2 | 0 | Y Y | | | |
| P61 | Annex A | An identity metadata block may contain unknown extensions. | 1 and 2 | 0 | Y Y | | | |
| P62 | Annex A | A representation block may contain landmarks blocks. | 1 and 2 | 0 | Y Y | | | |
| P63 | Annex A | A landmark block may contain a landmark kind value. | 1 and 2 | 0 | Y Y | | | |
| P64 | Annex A | A landmark block may contain a landmark coordinates block. | 1 and 2 | 0 | Y Y | | | |
| P65 | Annex A | A landmark coordinates block may contain a 2D Cartesian coordinates block. | 1 and 2 | 0 | Y Y | | | |
| P66 | Annex A | A landmark coordinates block may contain a texture image coordinates block. | 1 and 2 | 0 | Y Y | | | |
| P67 | Annex A | A landmark coordinates block may contain a 3D Cartesian coordinates block. | 1 and 2 | 0 | Y Y | | | |
| P68 | Annex A | A landmark coordinates block may contain unknown extensions. | 1 and 2 | 0 | Y Y | | | |
| P69 | Annex A | A landmark block may contain unknown extensions. | 1 and 2 | 0 | Y Y | | | |
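Provisions P48 to P60 describe a pose angle block in which the yaw, pitch and roll blocks are each optional, and each may carry an angle value and an angle uncertainty. As an illustration only (hypothetical names, not the standard's ASN.1), that nesting can be sketched as:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the pose angle structure tested by P48-P60.
@dataclass
class AngleBlock:
    value_degrees: Optional[float] = None        # angle value field (P50/P54/P58)
    uncertainty_degrees: Optional[float] = None  # angle uncertainty field (P51/P55/P59)

@dataclass
class PoseAngle:
    yaw: Optional[AngleBlock] = None    # cf. P49
    pitch: Optional[AngleBlock] = None  # cf. P53
    roll: Optional[AngleBlock] = None   # cf. P57

# A record may state only the yaw, with an explicit uncertainty.
pose = PoseAngle(yaw=AngleBlock(value_degrees=12.5, uncertainty_degrees=2.0))
assert pose.pitch is None and pose.yaw.value_degrees == 12.5
```

Because every level of the hierarchy is optional, a conforming reader has to tolerate any combination of present and absent angle blocks.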
Table C.2 (continued)

| Provision identifier | Reference in data format specification | Provision summary | Level | Status | Format type applicability | IUT support | Supported range | Test result |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| P70 | Annex A | A representation block may contain unknown extensions. | 1 and 2 | 0 | Y Y | | | |
| P71 | Annex A | An image representation may contain a 2D image representation block. | 1 and 2 | 0 | Y Y | | | |
| P72 | Annex A | A 2D image representation block may contain a 2D capture device block. | 1 and 2 | 0 | Y Y | | | |
| P73 | Annex A | A 2D capture device block may contain a 2D capture device spectral block. | 1 and 2 | 0 | Y Y | | | |
| P74 | Annex A | A 2D capture device spectral block may contain a white-light field. | 1 and 2 | 0 | Y Y | | | |
| P75 | Annex A | A 2D capture device spectral block may contain a near-infrared field. | 1 and 2 | 0 | Y Y | | | |
| P76 | Annex A | A 2D capture device spectral block may contain a thermal field. | 1 and 2 | 0 | Y Y | | | |
| P77 | Annex A | A 2D capture device spectral block may contain unknown extensions. | 1 and 2 | 0 | Y Y | | | |
| P78 | Annex A | A 2D capture device block may contain a 2D capture device technology identifier field. | 1 and 2 | 0 | Y Y | | | |
| P79 | Annex A | A 2D capture device block may contain unknown extensions. | 1 and 2 | 0 | Y Y | | | |
| P80 | Annex A | A 2D image information block may contain a 2D face image kind field. | 1 and 2 | 0 | Y Y | | | |
| P81 | Annex A | A 2D image information block may contain a post-acquisition processing block. | 1 and 2 | 0 | Y Y | | | |
| P82 | Annex A | A post-acquisition processing block within a 2D image information block may contain a rotated field. | 1 and 2 | 0 | Y Y | | | |
| P83 | Annex A | A post-acquisition processing block within a 2D image information block may contain a cropped field. | 1 and 2 | 0 | Y Y | | | |
| P84 | Annex A | A post-acquisition processing block within a 2D image information block may contain a down-sampled field. | 1 and 2 | 0 | Y Y | | | |
| P85 | Annex A | A post-acquisition processing block within a 2D image information block may contain a white-balance-adjusted field. | 1 and 2 | 0 | Y Y | | | |
| P86 | Annex A | A post-acquisition processing block within a 2D image information block may contain a multiply-compressed field. | 1 and 2 | 0 | Y Y | | | |
Table C.2 (continued)

| Provision identifier | Reference in data format specification | Provision summary | Level | Status | Format type applicability | IUT support | Supported range | Test result |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| P87 | Annex A | A post-acquisition processing block within a 2D image information block may contain an interpolated field. | 1 and 2 | 0 | Y Y | | | |
| P88 | Annex A | A post-acquisition processing block within a 2D image information block may contain a contrast-stretched field. | 1 and 2 | 0 | Y Y | | | |
| P89 | Annex A | A post-acquisition processing block within a 2D image information block may contain a pose-corrected field. | 1 and 2 | 0 | Y Y | | | |
| P90 | Annex A | A post-acquisition processing block within a 2D image information block may contain a multi-view image field. | 1 and 2 | 0 | Y Y | | | |
| P91 | Annex A | A post-acquisition processing block within a 2D image information block may contain an age-progressed field. | 1 and 2 | 0 | Y Y | | | |
| P92 | Annex A | A post-acquisition processing block within a 2D image information block may contain a super-resolution processed field. | 1 and 2 | 0 | Y Y | | | |
| P93 | Annex A | A post-acquisition processing block within a 2D image information block may contain a normalised field. | 1 and 2 | 0 | Y Y | | | |
| P94 | Annex A | A post-acquisition processing block within a 2D image information block may contain unknown extensions. | 1 and 2 | 0 | Y Y | | | |
| P95 | Annex A | A 2D image information block may contain a lossy-transformation attempts field. | 1 and 2 | 0 | Y Y | | | |
| P96 | Annex A | A 2D image information block may contain a camera-to-subject distance field. | 1 and 2 | 0 | Y Y | | | |
| P97 | Annex A | A 2D image information block may contain a sensor diagonal field. | 1 and 2 | 0 | Y Y | | | |
| P98 | Annex A | A 2D image information block may contain a lens focal length field. | 1 and 2 | 0 | Y Y | | | |
| P99 | Annex A | A 2D image information block may contain an image size block. | 1 and 2 | 0 | Y Y | | | |
| P100 | Annex A | If the 2D image data format is unknown or other or a later version extension code, then an image size block (width and height) shall be included. | 1 and 2 | 0 | Y Y | | | |
| P101 | Annex A | An image size block may contain a width field. | 1 and 2 | 0 | Y Y | | | |
| P102 | Annex A | An image size block may contain a height field. | 1 and 2 | 0 | Y Y | | | |
| P103 | Annex A | A 2D image information block may contain an image face measurements block. | 1 and 2 | 0 | Y Y | | | |
Table C. 2 (continued)
表格 C. 2(續)
| Provision identifier | Reference in data format specification | Provision summary | Level | Status | Format type applicability | | IUT support | Supported range | Test result |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| P104 | Annex A | An image face measurements block may contain an image head-width field. | 1 and 2 | O | Y | Y | | | |
| P105 | Annex A | An image face measurements block may contain an image inter-eye distance field. | 1 and 2 | O | Y | Y | | | |
| P106 | Annex A | An image face measurements block may contain an image eye-to-mouth distance field. | 1 and 2 | O | Y | Y | | | |
| P107 | Annex A | An image face measurements block may contain an image head-length field. | 1 and 2 | O | Y | Y | | | |
| P108 | Annex A | An image face measurements block may contain unknown extensions. | 1 and 2 | O | Y | Y | | | |
| P109 | Annex A | A 2D image information block may contain an image colour space field. | 1 and 2 | O | Y | Y | | | |
| P110 | Annex A | A 2D image information block may contain a reference colour mapping block. | 1 and 2 | O | Y | Y | | | |
| P111 | Annex A | A reference colour mapping block may contain a reference colour scheme field. | 1 and 2 | O | Y | Y | | | |
| P112 | Annex A | A reference colour mapping block may contain reference colour definition-and-value blocks. | 1 and 2 | O | Y | Y | | | |
| P113 | Annex A | A reference colour definition-and-value block may contain a reference colour definition field. | 1 and 2 | O | Y | Y | | | |
| P114 | Annex A | A reference colour definition-and-value block may contain a reference colour value field. | 1 and 2 | O | Y | Y | | | |
| P115 | Annex A | A reference colour definition-and-value block may contain unknown extensions. | 1 and 2 | O | Y | Y | | | |
| P116 | Annex A | A reference colour mapping block may contain unknown extensions. | 1 and 2 | O | Y | Y | | | |
| P117 | Annex A | A 2D image information block may contain unknown extensions. | 1 and 2 | O | Y | Y | | | |
| P118 | Annex A | A 2D image representation block may contain unknown extensions. | 1 and 2 | O | Y | Y | | | |
| P119 | Annex A | An image representation may contain a 3D shape representation block. | 1 and 2 | O | Y | Y | | | |
Table C.2 (continued)
| Provision identifier | Reference in data format specification | Provision summary | Level | Status | Format type applicability | | IUT support | Supported range | Test result |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| P120 | Annex A | A 3D shape representation block may contain a 3D capture device block. | 1 and 2 | O | Y | Y | | | |
| P121 | Annex A | A 3D capture device technology block may contain a 3D modus field. | 1 and 2 | O | Y | Y | | | |
| P122 | Annex A | A 3D capture device technology block may contain a 3D capture device technology kind field. | 1 and 2 | O | Y | Y | | | |
| P123 | Annex A | A 3D capture device technology block may contain unknown extensions. | 1 and 2 | O | Y | Y | | | |
| P124 | Annex A | A 3D image information block may contain an image colour space field. | 1 and 2 | O | Y | Y | | | |
| P125 | Annex A | A 3D image information block may contain a 3D face image kind field. | 1 and 2 | O | Y | Y | | | |
| P126 | Annex A | A 3D image information block may contain an image size block. | 1 and 2 | O | Y | Y | | | |
| P127 | Annex A | If the 3D image data format is unknown or other or a later version extension code, then an image size block (width and height) shall be included. | 1 and 2 | O | Y | Y | | | |
| P128 | Annex A | A 3D image information block may contain a 3D physical face measurements block. | 1 and 2 | O | Y | Y | | | |
| P129 | Annex A | A physical face measurements block may contain a 3D physical head-width field. | 1 and 2 | O | Y | Y | | | |
| P130 | Annex A | A physical face measurements block may contain a 3D physical inter-eye distance field. | 1 and 2 | O | Y | Y | | | |
| P131 | Annex A | A physical face measurements block may contain a 3D physical eye-to-mouth distance field. | 1 and 2 | O | Y | Y | | | |
| P132 | Annex A | A physical face measurements block may contain a 3D physical head-length field. | 1 and 2 | O | Y | Y | | | |
| P133 | Annex A | A physical face measurements block may contain unknown extensions. | 1 and 2 | O | Y | Y | | | |
| P134 | Annex A | A 3D image information block may contain a post-acquisition processing block. | 1 and 2 | O | Y | Y | | | |
| P135 | Annex A | A post-acquisition processing block within a 3D image information block may contain a rotated field. | 1 and 2 | O | Y | Y | | | |
Table C.2 (continued)
| Provision identifier | Reference in data format specification | Provision summary | Level | Status | Format type applicability | | IUT support | Supported range | Test result |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| P136 | Annex A | A post-acquisition processing block within a 3D image information block may contain a cropped field. | 1 and 2 | O | Y | Y | | | |
| P137 | Annex A | A post-acquisition processing block within a 3D image information block may contain a down-sampled field. | 1 and 2 | O | Y | Y | | | |
| P138 | Annex A | A post-acquisition processing block within a 3D image information block may contain a white-balance-adjusted field. | 1 and 2 | O | Y | Y | | | |
| P139 | Annex A | A post-acquisition processing block within a 3D image information block may contain a multiply-compressed field. | 1 and 2 | O | Y | Y | | | |
| P140 | Annex A | A post-acquisition processing block within a 3D image information block may contain an interpolated field. | 1 and 2 | O | Y | Y | | | |
| P141 | Annex A | A post-acquisition processing block within a 3D image information block may contain a contrast-stretched field. | 1 and 2 | O | Y | Y | | | |
| P142 | Annex A | A post-acquisition processing block within a 3D image information block may contain a pose-corrected field. | 1 and 2 | O | Y | Y | | | |
| P143 | Annex A | A post-acquisition processing block within a 3D image information block may contain a multi-view image field. | 1 and 2 | O | Y | Y | | | |
| P144 | Annex A | A post-acquisition processing block within a 3D image information block may contain an age-progressed field. | 1 and 2 | O | Y | Y | | | |
| P145 | Annex A | A post-acquisition processing block within a 3D image information block may contain a super-resolution processed field. | 1 and 2 | O | Y | Y | | | |
| P146 | Annex A | A post-acquisition processing block within a 3D image information block may contain a normalised field. | 1 and 2 | O | Y | Y | | | |
| P147 | Annex A | A post-acquisition processing block within a 3D image information block may contain unknown extensions. | 1 and 2 | O | Y | Y | | | |
| P148 | Annex A | A 3D image information block may contain a 3D textured image resolution block. | 1 and 2 | O | Y | Y | | | |
| P149 | Annex A | A 3D textured image resolution block may contain a 3D mm shape x resolution field. | 1 and 2 | O | Y | Y | | | |
| P150 | Annex A | A 3D textured image resolution block may contain a 3D mm shape y resolution field. | 1 and 2 | O | Y | Y | | | |
Table C.2 (continued)
| Provision identifier | Reference in data format specification | Provision summary | Level | Status | Format type applicability | | IUT support | Supported range | Test result |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| P151 | Annex A | A 3D textured image resolution block may contain a 3D mm shape z resolution field. | 1 and 2 | O | Y | Y | | | |
| P152 | Annex A | A 3D textured image resolution block may contain a 3D mm texture resolution field. | 1 and 2 | O | Y | Y | | | |
| P153 | Annex A | A 3D textured image resolution block may contain a 3D texture acquisition period field. | 1 and 2 | O | Y | Y | | | |
| P154 | Annex A | A 3D textured image resolution block may contain a 3D face area scanned block. | 1 and 2 | O | Y | Y | | | |
| P155 | Annex A | A 3D face area scanned block may contain a front-of-the-head field. | 1 and 2 | O | Y | Y | | | |
| P156 | Annex A | A 3D face area scanned block may contain a chin field. | 1 and 2 | O | Y | Y | | | |
| P157 | Annex A | A 3D face area scanned block may contain an ears field. | 1 and 2 | O | Y | Y | | | |
| P158 | Annex A | A 3D face area scanned block may contain a neck field. | 1 and 2 | O | Y | Y | | | |
| P159 | Annex A | A 3D face area scanned block may contain a back-of-the-head field. | 1 and 2 | O | Y | Y | | | |
| P160 | Annex A | A 3D face area scanned block may contain a full-head field. | 1 and 2 | O | Y | Y | | | |
| P161 | Annex A | A 3D face area scanned block may contain unknown extensions. | 1 and 2 | O | Y | Y | | | |
| P162 | Annex A | A 3D textured image resolution block may contain unknown extensions. | 1 and 2 | O | Y | Y | | | |
| P163 | Annex A | A 3D image information block may contain a 3D texture map block. | 1 and 2 | O | Y | Y | | | |
| P164 | Annex A | A 3D texture map block may contain an image data format field. | 1 and 2 | O | Y | Y | | | |
| P165 | Annex A | A 3D texture map block may contain a 3D texture capture device spectral block spectrum field. | 1 and 2 | O | Y | Y | | | |
| P166 | Annex A | A 3D texture map block may contain a 3D texture standard illuminant field. | 1 and 2 | O | Y | Y | | | |
| P167 | Annex A | A 3D texture map block may contain a 3D error map field. | 1 and 2 | O | Y | Y | | | |
| P168 | Annex A | A 3D texture map block may contain unknown extensions. | 1 and 2 | O | Y | Y | | | |
| P169 | Annex A | A 3D image information block may contain unknown extensions. | 1 and 2 | O | Y | Y | | | |
| P170 | Annex A | A 3D shape representation block may contain unknown extensions. | 1 and 2 | O | Y | Y | | | |
Table C.2 (continued)
| Provision identifier | Reference in data format specification | Provision summary | Level | Status | Format type applicability | | IUT support | Supported range | Test result |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| P171 | Annex A | A 3D representation kind block may contain a 3D vertex block. | 1 and 2 | O | Y | Y | | | |
| P172 | Annex A | A 3D vertex block may contain a 3D vertex information block. | 1 and 2 | O | | | | | |
| P173 | Annex A | A 3D vertex information block may contain a 3D vertex identifier. | 1 and 2 | O | Y | Y | | | |
| P174 | Annex A | A 3D vertex information block may contain a 3D vertex normals block. | 1 and 2 | O | Y | Y | | | |
| P175 | Annex A | A 3D vertex information block may contain a 3D textures block. | 1 and 2 | O | Y | Y | | | |
| P176 | Annex A | A 3D vertex information block may contain a 3D error map field. | 1 and 2 | O | Y | Y | | | |
| P177 | Annex A | A 3D vertex information block may contain unknown extensions. | 1 and 2 | O | Y | Y | | | |
| P178 | Annex A | A 3D vertex block may contain a 3D vertex triangle data block. | 1 and 2 | O | Y | Y | | | |
| P179 | Annex A | A 3D representation kind may contain unknown extensions. | 1 and 2 | O | Y | Y | | | |

ISO/IEC 39794-5:2019(E)

IUT support notes

To be filled in by the supplier of the IUT on the copy of this table provided to the testing laboratory, and to be included in the copy of this table that forms part of the test report.

Test result notes

To be filled in by the testing laboratory, if necessary, during the execution of the conformance test, and to be included in the copy of this table that forms part of the test report.

C.3 Conformance test assertions

Level 1 and 2 requirements and options shall be tested by:
第一級與第二級需求及選項應透過以下方式進行測試:
  • decoding tagged binary data blocks under test based on the ASN. 1 module that specifies the tagged binary data format; or
    根據指定標記二進位資料格式的 ASN.1 模組,解碼受測標記二進位資料區塊;或
  • validation of XML documents under test against the XML schema definition that specifies the textual data format, respectively.
    分別針對指定文字資料格式的 XML 結構定義,驗證受測 XML 文件。
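To make the binary-side check concrete, the sketch below walks a BER-style tag-length-value (TLV) byte string, which is the elementary operation a conformance tool performs when decoding tagged binary data blocks. This is an illustrative toy, not the tooling implied by the standard: a real tool derives the expected structure from the ASN.1 module, whereas this sketch only handles single-byte tags and definite (short- and long-form) lengths.

```python
# Hedged sketch: minimal BER tag-length-value (TLV) walker. Real conformance
# tools validate structure against the ASN.1 module; this toy only splits a
# byte string into (tag, value) pairs, assuming single-byte tags and
# definite-length encoding (short form, or long form per X.690).

def parse_tlv(data: bytes):
    items, i = [], 0
    while i < len(data):
        tag = data[i]; i += 1
        length = data[i]; i += 1
        if length & 0x80:  # long form: low 7 bits give the number of length octets
            n = length & 0x7F
            length = int.from_bytes(data[i:i + n], "big"); i += n
        items.append((tag, data[i:i + length])); i += length
    return items

# Example: two primitive elements, tag 0x02 (INTEGER 5) and tag 0x04 (OCTET STRING "ab")
encoded = bytes([0x02, 0x01, 0x05, 0x04, 0x02, 0x61, 0x62])
print(parse_tlv(encoded))  # [(2, b'\x05'), (4, b'ab')]
```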

C.4 Conformance testing for profiles given in Annex D

This clause specifies conformance testing methodologies for the specific requirements according to the application profiles as given in Annex D.
NOTE Currently, no conformance testing methodologies for the application profiles in Annex D are available for this document.

Annex D (normative)

Application profiles

D.1 Reference face image for Machine Readable Travel Documents (MRTD)

D.1.1 General

ICAO Doc 9303 provides the basic functional specification for MRTDs and describes all relevant properties of MRTDs.
The face portrait printed on the ICAO compliant MRTD is an essential element of that document and one of the most important information carriers binding the document to the holder. A standardized face portrait produced at a high quality helps issuing agencies to screen identity and border agencies to inspect the travel document manually or via automated processing.
After the introduction of the digitally stored image in 2005, ABC (Automated Border Control) systems have been introduced to perform automated comparison of the individual and the electronically stored image. These ABC systems compare, whether manually or automated, the printed image and/or the electronically stored image against the image taken live while crossing a border.
This annex contains significant content from Reference [38].

D.1.2 Overview

This annex describes the requirements and best practice recommendations to be applied for face portrait capturing in the application case of enrolment of biometric reference data for electronic MRTDs. In this sense, this annex is an application profile.
This annex:
  • shares the lessons learned using the stored and displayed face portrait in an MRTD,
  • describes how the face portraits that serve as the content of this document and its data structures should be captured,
  • provides the experiences made applying face recognition technology in ABC gates, manual border control, identity screening, and other applications based on the face portraits provided by electronic MRTDs, and gives guidance on the requirements for capturing and processing face portraits contained in MRTDs to support the inspection process, and
  • provides comprehensive recommendations for face portrait capturing, including scene, photographic and digital requirements.
The following topics are not in scope of this annex; requirements on them are given in ICAO Doc 9303:
  • image printing and scanning as well as digital image processing,
  • face portraits printed on MRTDs to ensure good visibility for inspection,
  • guidance for reader system manufacturers on the use of unified reflection-free illumination and view angles, and
  • image capturing for verification and/or identification applications like ABC, even if many of the requirements listed in this document apply to such images, too.


The following topics are not in scope of this annex:
  • definition of image data formats like JPEG, JPEG2000 and PNG,
  • security aspects like digital image electronic signatures, presentation attack detection (PAD), and morphing prevention.
For certain criteria, there may be two different levels given in table form: a minimum requirement and a best practice recommendation. The requirement gives the minimum acceptable values or value ranges in order to reach compliance. The best practice recommendation gives values that will result in better overall performance or quality, and users are encouraged to adopt best practice values whenever possible. See Table D.1.
Table D.1 - Sample table summarizing minimum requirements and best practice recommendations

| Criterion | Requirement | The criterion shall be ... |
| :--- | :--- | :--- |
| | Best practice | The criterion should be ... |

D.1.3 Face portraits

D.1.3.1 Uses of face portrait images

Face portraits appear in several places on and in an MRTD:
  • as a printed image on the data page (Zone V as defined in ICAO Doc 9303),
  • as a digital image stored in the RFID chip,
  • optionally, as a secondary image on the data and/or observation page, e.g. as a changeable laser image, as a micro-perforation, or as a background print.
All the images used shall be derived from the same captured face portrait. However, the technical requirements of each of the images may differ depending on the applied technology.
The intended use for a printed face portrait is to give a good physical representation of the document holder and to allow for a human comparison of the face portrait and the holder of the MRTD. Physical security features and the printing technology may interact with or influence the face portrait, which needs to be considered as part of the comparison process.
The intended use of the face portrait image digitally stored in the chip is such that the image can be compared to the printed face portrait and the human via manual processes, or compared to a live image via automated processes in a 1:1 or 1:N application case. Because of the way the image is stored on the MRTD (see Doc 9303 Part 11), border agencies can confirm that the image has been stored on the MRTD by the issuing authority and remains unaltered and unsubstituted. The digital image is the primary image used for biometric comparison.
The secondary images serve as physical security features protecting the printed face portrait. Therefore their appearance shall be the same as the printed face portrait. However, size and production technology determine the technical requirements of the face portrait derivative used here.
Figure D.1 displays the appearance of the face portrait as a printed image and as a digitally stored image.

Key

1 printed image
2 digitally stored image
Figure D.1 - Face portraits to be used on/in an MRTD
Two main types of application processes are considered, those based on:
  • submission of printed photographs provided by the citizen to the passport authority, and
  • electronic face portrait submission.
There are two sub-types of the second type:
  • live capture, where the applicant has the photo taken during an interview or application submission, and
  • upload, where the image is provided electronically by the applicant, by an enrolment bureau or by an accredited ID photo service.
These two sub-types are subject to the same requirements. Depending on the process type, different clauses of this document apply, as defined below. Both main types are shown in Figure D.2.
The production of the printed as well as of the electronic face portraits may be done by automated photo kiosks, officers of passport authorities, photo booths or photographers. It is essential that the quality requirements are met. Photographic experts should be consulted before introducing a new enrolment solution.

Figure D.2 - Face portrait enrolment process variations

D.1.3.2 Passport application using printed face portraits

For a passport application process that uses printed face portraits the citizen typically visits a photographer or photo booth to obtain such a face portrait. In all cases the citizen receives printed photos, and there is no electronic submission or linkage to an electronically stored image available. Then the citizen submits such a photo to the passport issuing authority as part of their application. To establish a passport application process using printed face portraits, D.1.4 and D.1.5 apply. Additional requirements on image printing for submission purposes, scanning of printed images and image printing on MRTD data pages specified in ICAO Doc 9303 apply.

D.1.3.3 Passport application using electronic face portrait submission

Enrolment data providers, such as photographers, photo booths or kiosks, can be linked electronically to the issuing authorities. For a passport application that uses electronic submission, the intermediate steps dealing with a printed photo are skipped. In most cases, the photo is digitally captured and electronically stored or directly transmitted to the passport issuing authority. There are many ways the face portrait may be transferred to the passport issuing authority. Such schemas include direct transmission to the authority, a data carrier submitted by the citizen, and temporary storage on a server with submission of a reference to the uploaded/stored photo provided by the citizen. Live capture, where the applicant has the photo taken during an interview or application submission, is covered here, too. To establish a passport application process using electronic face portraits, D.1.4 and D.1.5 apply. Additional requirements on image printing on MRTD data pages specified in ICAO Doc 9303 apply. The IED requirement for electronically submitted images follows the same requirement for chip images as in D.1.5 and Tables D.4 and D.10.

D.1.3.4 Non-professional photographs

Passport applicants should not be encouraged to submit face portraits captured by amateur photographers or captured on amateur equipment such as mobile phones or tablets, or printed on consumer printers (home-made face portraits), as they typically do not achieve the required quality level as specified in D.1.4. If an issuer decides to accept home-made face portraits, the issuer shall ensure, based on an appropriate level of expertise, that the printing quality and all of the requirements specified in D.1.4 are maintained, and that the risks of photo manipulation and morphing inherent in such an uncontrolled process are suitably mitigated.

D.1.4 Enrolment live face portrait capturing

D.1.4.1 General

This clause describes the requirements for the environment that is used for face portrait capturing. Additionally, it gives recommendations on best practice. The requirements on the environment are derived from experience gained in face recognition applications, including ABC gates, and they consider the methods used by professional photographers.
Before introducing new equipment and defining processes for enrolment data capturing, an experienced face portrait photographer and/or an optics expert should be asked for advice. The requirements apply to all installations, including photo booths and kiosks.
This clause specifies requirements for the photograph being captured as well as for the photographic equipment being used. Figure D.3 shows the content of D.1.4 in the MRTD production process chain.

Figure D.3 - Content of D.1.4 in the MRTD production process chain (boxed in red)
NOTE Pose constraints are very difficult to evaluate on the acquired 2D image, even for experts in this field. Numeric values have been provided in this document to support consistent subject positioning in the full-frontal pose.

D.1.4.2 Camera and scene

D.1.4.2.1 Selection of camera and focal length

In addition to choosing an appropriate camera-to-subject distance (CSD), as described in D.1.4.2.2, the selection of a camera and its lens is a major factor affecting the quality of face portrait images. To ensure high image quality and a standards-compliant inter-eye distance (IED), the camera's sensor must have sufficient pixel dimensions and its lens must be chosen to match its image sensor's physical dimensions.
For example, for a camera using an APS-C sensor (having a crop factor of 1,44), photographers should consider using a lens of focal length between 50/1,44 and 130/1,44, or roughly 35 mm to 90 mm.
For face portrait photos, photographers using a conventional 35 mm film camera (having a 36 mm × 24 mm frame, with a 43,3 mm diagonal) often select a normal to moderate telephoto lens, with a focal length between 50 mm and 130 mm (or an equivalent zoom lens). For digital cameras employing typically smaller CMOS or CCD image sensors, the lens selected for face portrait photography should have a proportionally decreased focal length.
For further explanations on sensor diagonal and sensor diagonal encoding see 7.42 and 7.43.

Figure D.4 illustrates the typical optical arrangement and terminology for face portrait image acquisition, as well as some of the variables in the arrangement.

Figure D.4 - Illustration of optical arrangement and terminology
For a selected CSD (in millimetres), a camera image sensor with a vertical dimension of $h_{mm}$ millimetres, and a requested field of view of $H_{\text{FieldOfView,mm}}$, the focal length $f$ (in millimetres) can be computed using the following relationship in order to fit the requested field of view of the subject onto the sensor dimensions:

$$f \cong h_{mm} \frac{CSD}{H_{\text{FieldOfView,mm}}}$$

In case of home-made face portraits, this lens optimization is not done, owing to the large camera angle.
For the same camera image sensor with a vertical pixel count of $h_{px}$ pixels, the inter-eye distance on the sensor in pixels, $IED_{px}^{\text{Sensor}}$, may be computed using the following relationships, where $IED_{mm}^{\text{Subject}}$ is the inter-eye distance in millimetres on the subject:

$$IED_{mm}^{\text{Sensor}} \cong IED_{mm}^{\text{Subject}} \times \frac{f}{CSD}$$

and

$$IED_{px}^{\text{Sensor}} = IED_{mm}^{\text{Sensor}} \times \frac{h_{px}}{h_{mm}}$$
EXAMPLE 1 A commercially available digital single lens reflex (DSLR) camera has the following specifications: APS-C sensor, 22,3 mm × 14,9 mm, 5184 px × 3456 px, 18 megapixels. For a CSD of 1200 mm, a typical $H_{\text{FieldOfView,mm}}$ of 500 mm and a typical subject IED of about 62 mm, the calculations below show that the focal length $f$ will be about 50 mm (equivalent to 80 mm full frame).

$$f \cong 22{,}3\ \text{mm} \times \frac{1200\ \text{mm}}{500\ \text{mm}} \cong 53{,}5\ \text{mm} \cong 50\ \text{mm}$$

The calculations below show that $IED_{px}^{\text{Sensor}}$ will be about 598 pixels, well above the best practice value suggested in Table D.4.

$$IED_{mm}^{\text{Sensor}} \cong 62\ \text{mm} \times \frac{50\ \text{mm}}{1200\ \text{mm}} \cong 2{,}58\ \text{mm}$$

and

$$IED_{px}^{\text{Sensor}} \cong 2{,}58\ \text{mm} \times \frac{3456\ \text{px}}{14{,}9\ \text{mm}} = 598\ \text{pixels}$$

EXAMPLE 2 For a sensor of 5 megapixels (2592 px × 1944 px) with an optimized focal length lens, $IED_{px}^{\text{Sensor}}$ will be about 336 pixels, well above the best practice value suggested in Table D.4:

$$IED_{px}^{\text{Sensor}} \cong 2{,}58\ \text{mm} \times \frac{1944\ \text{px}}{14{,}9\ \text{mm}} = 336\ \text{pixels}$$

D.1.4.2.2 Magnification distortion and camera subject distance

All images captured by a photographic system will contain image distortion. Every face portrait is a compromise between different requirements, such as camera and lens costs or available space and illumination. This document gives requirements and recommendations to ensure global interoperability, in the sense that the most important properties of every face portrait used for MRTD purposes reach the correct quality requirements and therefore ensure similar performance in face image-based authentication applications like border control systems.
The CSD requirements are listed in Table D.2. For sample face portraits illustrating possible effects of the optical system, see Figures D.6 and D.7. Table D.3 lists different camera subject distances and their corresponding magnification distortions.
Magnification distortion can only be evaluated by measuring tools (see E.2); it is not possible to evaluate the magnification distortion value by human vision. For information, ears start to be masked around a magnification distortion of 14 % or higher.
NOTE 1 Selfie-style face portraits are likely not to maintain the minimal distance requirement.
Table D.2 - CSD requirements and recommendations

| Criterion: CSD for 1:1 | Requirement | 0,7 m ≤ CSD ≤ 4 m |
| :--- | :--- | :--- |
| | Best practice | 1,0 m ≤ CSD ≤ 2,5 m |
| Criterion: CSD for 1:N | Requirement | 1 m ≤ CSD ≤ 4 m |
| | Best practice | 1,2 m ≤ CSD ≤ 2,5 m |

The camera shall be at the subject's eye level. The line between the camera and the centre of the subject's face shall be horizontal within a maximum HD of ±5°. Height alignment should be done by vertical adjustment of either the subject or the camera. See Figure D.5.

Figure D.5 - Alignment of camera and subject
These recommendations and requirements apply to all capturing setups, including photo booths and kiosks.


Figure D.6 - Appearance with and without strong magnification distortion
One of the important factors that influence the appearance of the facial features is the distance between the subject and the camera lens.
The magnification distortion due to camera subject distance can be noticeable to human examiners but shall be within defined limits that allow effective face recognition.
Acceptable distortion rate tolerances depend on the performance capacity of state-of-the-art face recognition technology, and on the capability of typical human inspection staff to recognize people, even those of varying ethnic origin.


a) Distance of 300 mm

b) Distance of 400 mm

c) Distance of 600 mm

Figure D.7 - Sample face portraits taken with a full-size sensor camera at focal length 50 mm from different distances
These images have been captured using the enrolment bench described in E.6. All images have been normalized to a constant IED. The red bars mark the distance between the feature points 10.7 and 10.8 according to ISO/IEC 14496-2:2004, Annex C, measured in Figure D.7 i).
Rulers at nose and ear may be used to measure the geometric effect on the face, i.e. a millimetre at nose level appears larger than a millimetre at ear level on the image of the rulers.
The maximum level of magnification distortion of the capturing process shall be set depending on the appropriate application case:
  • 1:1 application case: At the border, an automatic and/or human face verification/comparison is performed. This is the case in most automated border control applications. The maximum magnification distortion rate of the picture in the passport shall not be greater than 7 % and ideally should not be greater than 5 %.
  • 1:N application case: At the enrolment or issuance time of the document, a 1:N face identification is done on a database to help verify the uniqueness of the identity associated with the new image provided. N is as large as the number of images searched. This application case requires higher quality enrolment. The maximum magnification distortion rate shall not be greater than 5 % and ideally should not be greater than 4 %.
The study presented in E.6 has shown that, for a large range of enrolment and verification distances, the influence of magnification distortion on automatic face recognition system performance is low.
The magnification distortion is considered to be noticeable if the distance between units on a ruler at the nose tip level, measured in pixels, is more than 5 % larger than the distance between units on a ruler at the outer canthus level, measured in pixels. The elevation of the nose compared to the outer canthus of the test subject is assumed to be 50 mm. It is sufficient to measure this properly once whenever a photographic setup is introduced or modified. An example photo is given in Figure D.8. Examples of face portraits with good appearance and with too strong magnification distortion are given in Figure D.7. The general case of the optical system is discussed in E.2.

Figure D.8 - Magnification distortion measurement with rulers at nose and eye level
There are several possible strategies for decreasing the magnification distortion. The general assessment of an optical system is discussed in E.2. Assuming a telecentric lens, the distance between sensor and subject does not introduce any magnification distortion; real systems need specific considerations and measurements like those described in E.2. Another strategy to decrease the magnification distortion is to increase the distance between subject and camera, or to fold the optical path. The principle of a folded optical path is illustrated in Figure D.9. These strategies are not limitative. Camera subject distances and corresponding magnification distortion examples are listed in Table D.3. Sample images taken with a high quality camera at several magnification distortion rates are given in Figure D.7.

Figure D.9 - Principle sketch of a folded optical path
Table D.3 - Camera subject distance and corresponding magnification distortion

| Camera subject distance in mm | Magnification distortion Δd/d for a standard (i.e. not telecentric) lens |
| :--- | :--- |
| 300 | 16,7 % |
| 400 | 12,5 % |
| 500 | 10,0 % |
| 600 | 8,3 % |
| 700 | 7,1 % |
| 1000 | 5,0 % |
| 1200 | 4,2 % |
| 1500 | 3,3 % |
| 2000 | 2,5 % |
| 2500 | 2,0 % |
| 3000 | 1,7 % |

NOTE 2 This magnification distortion only applies for standard (i.e. non-telecentric) lenses.
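The values in Table D.3 are consistent with the 50 mm nose-to-outer-canthus elevation assumed earlier in this subclause: each tabulated Δd/d equals 50 mm divided by the CSD. The short sketch below reproduces the table under that reading; note that this interpretation is inferred from the numbers, not stated as a formula in the text.

```python
# Hedged sketch: Table D.3's Δd/d values match (assumed 50 mm nose elevation) / CSD,
# i.e. the extra magnification of a feature 50 mm closer to a standard
# (non-telecentric) lens. Interpretation inferred from the tabulated values.

ELEVATION_MM = 50  # assumed elevation of the nose over the outer canthus

def magnification_distortion_pct(csd_mm: float) -> float:
    return 100.0 * ELEVATION_MM / csd_mm

for csd in [300, 400, 500, 600, 700, 1000, 1200, 1500, 2000, 2500, 3000]:
    print(f"{csd} mm -> {magnification_distortion_pct(csd):.1f} %")
# 300 mm -> 16.7 %, 1200 mm -> 4.2 %, ..., 3000 mm -> 1.7 %
```

This also makes the 1:1 and 1:N limits above concrete: the 7 % ceiling corresponds to a CSD of roughly 0,7 m, and the 5 % ceiling to roughly 1 m, matching the lower bounds in Table D.2.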

Home-made face portraits are also affected by the magnification distortion tolerance. Issuers who accept home-made face portraits should be aware that there is no scientific solution that allows checking compliance with the tolerance. Therefore, the acceptance of home-made face portraits is not recommended.
The issuer should allow for a transition period in which enrolment systems, e.g. photo booths and kiosks, may be updated to fulfil the magnification distortion requirements, considering economic and feasibility reasons. The duration of such a transition period is at the discretion of the issuer.

D.1.4.2.3 Radial distortion

The radial distortion due to lens properties can be noticeable to human examiners but shall be within defined limits that allow effective face recognition. In particular, fish-eye effects caused by wide-angle lenses combined with camera placement too close to the face shall not be present.
Acceptable distortion rate tolerances depend on the performance capacity of state-of-the-art face recognition technology, and on the capability of typical human inspection staff to recognize people, even those of varying ethnic origin.
If the radial distortion is less than 2 %, the human eye will not easily perceive it. It is recommended that radial distortion is less than 2,5 %.
The general assessment of an optical system is discussed in E.2.

D.1.4.2.4 Pixel count, focus and depth of field

Digital cameras used to capture face portraits shall produce images where the vertical and horizontal pixel density is the same.
Live captured face portraits of a subject:
  • Shall be captured in one of the following formats: PNG, JPEG, JPEG2000, or RAW formats supported by the camera; lossless formats should be preferred,
  • Should be captured at a minimum dimension of 1200 pixels width × 1600 pixels height (cropped image),
  • Shall be captured in colour.
One of the four possible encodings shall be used:
  • The JPEG sequential baseline (ISO/IEC 10918-1) mode of operation and encoded in the JFIF file format (the JPEG file format).
  • The JPEG-2000 Part-1 code stream format (ISO/IEC 15444-1), lossy, and encoded in the JP2 file format (the JPEG2000 file format).
  • The JPEG-2000 Part-1 code stream format (ISO/IEC 15444-1), lossless, and encoded in the JP2 file format (the JPEG2000 file format).
  • The PNG specification (ISO/IEC 15948). PNG shall not be used in its interlaced mode and not for images that have been JPEG compressed before.
For the use of RAW images see 7.40. The encoding into one of the four formats above can be done in a later process step before MRTD production.
The IED in the captured photo shall be at least 90 pixels for legacy applications. If an issuer considers the design of a new passport application process, the new IED should be at least 240 pixels. Examples for a new process could be live capturing, digital submission without analogue intermediate steps, or increasing the size of the printed photograph to be scanned, see Table D.4. See Figure 10 for an illustration of the IED measurement.
Table D.4 - IED capturing requirements and recommendations

| Criterion: live capture IED | Requirement | IED ≥ 90 pixels |
| :--- | :--- | :--- |
| | Best practice | IED ≥ 240 pixels |
| Criterion: scanned image IED | Requirement | IED ≥ 90 pixels |
| | Best practice | IED ≥ 240 pixels |
| Criterion: electronic submission IED | Requirement | IED ≥ 90 pixels |
| | Best practice | IED ≥ 240 pixels |
| Criterion: issuer repository IED | Requirement | IED ≥ 90 pixels |
| | Best practice | IED ≥ 240 pixels |
| Criterion: MRTD chip storage IED | Requirement | IED ≥ 90 pixels |
| | Best practice | IED ≥ 120 pixels |
NOTE This pixel count is specified for the live captured face portrait only. For stored images on the chip, see D.1.5 and Table D.10 as well.
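A hypothetical helper for checking the IED thresholds of Table D.4, assuming the eye-centre coordinates (feature points 12.1 and 12.2) are already available in pixel units; all names are illustrative:

```python
import math

# Thresholds from Table D.4 (requirement and best practice for most use cases).
IED_REQUIRED = 90
IED_BEST_PRACTICE = 240

def ied_pixels(left_eye, right_eye):
    """Euclidean distance in pixels between the two eye centres."""
    return math.dist(left_eye, right_eye)

def classify_ied(ied):
    """Map a measured IED to the Table D.4 categories."""
    if ied >= IED_BEST_PRACTICE:
        return "best practice"
    if ied >= IED_REQUIRED:
        return "compliant"
    return "non-compliant"
```

For the MRTD chip storage criterion, the best-practice threshold would be 120 pixels instead of 240.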
All images shall have sufficient focus and depth of field to maintain the required level of detail. The camera shall be capable of accurately rendering fine contrasted face details, such as wrinkles and moles, as small as 1 mm in diameter on the face.
The focus and depth of field of the camera shall be set so that the subject's captured image is in focus from nose to ears. In most cases, a depth of field of 150 mm will be sufficient. See Table D.5. The background behind the subject may be out of focus. Proper focus and depth of field will be assured either by using the camera auto-focus function with manual aperture settings or by pre-focusing the lens at the distance of the subject's eyes and selecting an appropriate aperture (F-stop) to ensure a depth of field covering the distance from the subject's nose to the ears. See E.5.
Table D.5 - Depth of field requirements and recommendations

| Criterion: Depth of field | Requirement | Nose to ears |
| :--- | :--- | :--- |
| | Best practice | 150 mm from nose level |

EXAMPLE An aperture of f/8 for an 80 mm lens at a distance of 2500 mm provides a depth of field of 150 mm. An aperture of f/16 for a 50 mm lens at a distance of 1200 mm provides a depth of field of about 180 mm.
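The EXAMPLE figures can be approximated with the common thin-lens depth-of-field approximation DoF ≈ 2·N·c·s²/f². A sketch, assuming a circle of confusion c of 0,01 mm; the standard does not state this value, it is chosen here because it reproduces both EXAMPLE figures to within a few percent:

```python
def depth_of_field_mm(f_number: float, focal_length_mm: float,
                      subject_distance_mm: float, coc_mm: float = 0.01) -> float:
    """Approximate total depth of field: DoF ~= 2 * N * c * s^2 / f^2.

    coc_mm (circle of confusion) is an assumed value; with c = 0.01 mm the
    EXAMPLE figures in the text are matched to within a few percent.
    """
    N, f, s, c = f_number, focal_length_mm, subject_distance_mm, coc_mm
    return 2.0 * N * c * s * s / (f * f)

print(depth_of_field_mm(8, 80, 2500))   # roughly 156 mm (the EXAMPLE says 150 mm)
print(depth_of_field_mm(16, 50, 1200))  # roughly 184 mm (the EXAMPLE says about 180 mm)
```

The sketch shows the design trade-off behind the EXAMPLE: stopping down (larger N) or stepping back (larger s) widens the in-focus zone, while a longer focal length narrows it.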
A simplified visual compliance check requires that the individual millimetre markings of rulers placed on the subject's nose and ear facing the camera can be seen simultaneously in a captured test image. See Figure D.10. This method should be used for quality assurance field checks from time to time. A more systematic test method is described in E.2.

Figure D.10 - Examples for sharpness at nose and ear level

D.1.4.2.5 Background

The background surface behind the subject shall be plain, and shall have no texture containing spots, lines or curves that will be visible in the captured image. The background shall have a uniform colour. There may be gradual changes from light to dark luminosity in a single direction, although this may make it more difficult to remove the background during the document production process.
A typical background for the scene is grey with a plain, dull flat surface. Plain light-coloured backgrounds such as light blue or white may be used as long as there is sufficient distinction between the face/hair area and the background. Camera colour settings should not be shifted depending on the background colour, see Figure D.11.

Figure D.11 - Examples of compliant portrait backgrounds
The boundary between the head and the background should be clearly identifiable around the entire subject, with the exception of very large hair volume. See Figure D.12. A boundary that is not clearly visible can have a negative impact on the production process, which often requires background removal.

Figure D.12 - Contrast examples
Shadows should not be visible on the background behind the face image. In particular, there shall not be asymmetric shadows. There shall not be any objects visible in the background like supporting persons, chair backs, furniture, carpets, patterned wallpapers or plants. For examples, see Figure D.13.

Figure D.13 - Examples for non-compliant backgrounds

D.1.4.2.6 Lighting

Face portraits shall have adequate and uniform illumination. Lighting shall be equally distributed on the face, in particular symmetrically, i.e., there is no difference between the brightness of the right and left side of the face. There shall not be a significant direction of the light visible from the point of view of the camera.
The measured EV at four spots on a subject's face (the left and right cheeks, the forehead, and the chin) should be the same. An EV difference of at most one F-stop or one shutter speed step is acceptable. If one or more of these four spots are covered by hair, e.g. the forehead by the hairstyle or the chin by a beard, these spots cannot be evaluated. The appropriate illumination setup of the scene should be verified from time to time. The subject used for these tests should not have a hairstyle covering the forehead or the cheeks, or a beard.
The uniformity measurement should be done as specified below. It is not intended to be used for every single image. See Figure D.14 for a visualization of that measurement. Automated quality assurance software, e.g. for registration offices or photo kiosks, should be implemented accordingly. However, such software should also consider exceptions due to hair on the forehead, beards, face anomalies and the like.
  1. Determine the four measurement zones on the forehead, the cheeks and the chin. These locations are determined as follows:
     a) Connect the two eye centres (feature points 12.1 and 12.2 from ISO/IEC 14496-2:2004, Annex C). The IED is the length of the connecting line H. The point M is the midpoint of this line.
     b) Connect M with the mouth midpoint (feature point 2.3 from ISO/IEC 14496-2:2004, Annex C). EMD is the length of the connecting line V. Note that the two lines do not need to be orthogonal.
     c) MP, the side length of the four squared measurement zones, is defined to be 0,3 IED.
     d) The centre of the forehead measurement zone F is located at a distance of 0,5 EMD upwards from M on V.
     e) The centre of the chin measurement zone C is located at a distance of 1,5 EMD downwards from M on V.
     f) The top left corner of the right (from the capture subject) cheek measurement zone R is located at a distance of 0,5 EMD downwards from M on V and 0,5 IED to the left of M on H (looking at the capture subject).
     g) The top right corner of the left (from the capture subject) cheek measurement zone L is located at a distance of 0,5 EMD downwards from M on V and 0,5 IED to the right of M on H (looking at the capture subject).
  2. For all colour channels, measure the mean intensity values MI for the measurement zones F, C, L and R.
  3. For all channels separately, the lowest MI (of F, C, L and R) in that channel shall not be lower than 50 % of the highest MI (of F, C, L and R).
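The steps above can be sketched in code. This is an illustrative simplification only: for brevity it assumes an upright frontal face so that V runs vertically in image coordinates (the standard allows V and H to be non-orthogonal), and all names are made up:

```python
def measurement_zones(eye_right, eye_left, mouth):
    """Return the F, C, R and L zones as (x, y, side) squares (top-left corner).

    eye_right / eye_left: eye centres 12.1 / 12.2 as (x, y) pixel coordinates,
    mouth: mouth midpoint 2.3. Assumes an upright face (V treated as vertical).
    """
    mx = (eye_right[0] + eye_left[0]) / 2.0                      # midpoint M of H
    my = (eye_right[1] + eye_left[1]) / 2.0
    ied = ((eye_left[0] - eye_right[0]) ** 2 +
           (eye_left[1] - eye_right[1]) ** 2) ** 0.5             # length of H
    emd = ((mouth[0] - mx) ** 2 + (mouth[1] - my) ** 2) ** 0.5   # length of V
    mp = 0.3 * ied                                               # side length MP
    half = mp / 2.0
    return {
        "F": (mx - half, my - 0.5 * emd - half, mp),  # centred 0,5 EMD above M
        "C": (mx - half, my + 1.5 * emd - half, mp),  # centred 1,5 EMD below M
        "R": (mx - 0.5 * ied, my + 0.5 * emd, mp),    # top-left corner rule f)
        "L": (mx + 0.5 * ied - mp, my + 0.5 * emd, mp),  # top-right corner rule g)
    }

def uniform_enough(mean_intensities):
    """Step 3: per channel, min MI of F, C, L, R >= 50 % of the max MI.

    mean_intensities: dict zone name -> (R, G, B) mean values.
    """
    for ch in range(3):
        vals = [mi[ch] for mi in mean_intensities.values()]
        if min(vals) < 0.5 * max(vals):
            return False
    return True
```

In practice the mean intensities would be computed over the returned squares for each colour channel before calling the check.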

Figure D.14 - Location and size of the intensity measurement zones
The measures for the illumination intensity and the requirements on them are listed in Table D.6.
Table D.6 - Measures for the illumination intensity compliance check

| Term | Description | Requirement |
| :--- | :--- | :--- |
| 12.1 | Feature point at left eye centre | |
| 12.2 | Feature point at right eye centre | |
| H | Line connecting 12.1 and 12.2 | |
| IED | Length of H between 12.1 and 12.2 | IED ≥ 90 pixels |
| M | Midpoint of H between 12.1 and 12.2 | |
| 2.3 | Feature point at mouth centre (with closed mouth the same as 2.2) | |
| V | Line connecting M and 2.3; V and H do not need to be orthogonal | |
| EMD | Length of V between M and 2.3 | |
| MP | Side length of the squared measurement zones | MP = 0,3 IED |
| F | Forehead measurement zone, located at a distance of 0,5 EMD upwards from M on V | |
| C | Chin measurement zone, located at a distance of 1,5 EMD downwards from M on V | |
| R | Right (from the capture subject) cheek measurement zone; its top left corner is located at a distance of 0,5 EMD downwards from M on V and 0,5 IED to the left of M on H | |
| L | Left (from the capture subject) cheek measurement zone; its top right corner is located at a distance of 0,5 EMD downwards from M on V and 0,5 IED to the right of M on H | |
| MI | Mean intensity value measured for every channel separately | max ≤ 2 × min (per channel) |
While it is understood that massive shadows on parts of the face will obscure face details important for identification, having no shadows at all will result in a non-natural appearance. In such a case, the face will appear flat and without surface features. Appropriate shadows help distinguish the shape of the nose, eye areas, forehead, cheeks, chin and so on. Furthermore, lighting and shadows are necessary to show details around the eyes, wrinkles and scars. There shall not be extreme dark shadows visible on the face, especially around the nose, in the eye sockets, around the mouth, and between mouth and chin, that obscure face details important for inspection. The brightness shall be nearly the same on both sides of the face, left and right. All features in the face shall be clearly recognizable, and the volume effect, especially around nose and eyes, shall render reality, see Figure D.15.
EXAMPLE To comply with this requirement, the illumination elements can be aligned at an angle of approximately 35° off the axis between camera lens and face centre. Descriptions of sample illumination layouts are given in E.1.


a) Side illumination
b) Top illumination
c) Bottom illumination

Figure D.15 - Examples for non-compliant illumination
Flashes should only be used for indirect illumination. Issuers may exclude the use of flashes. If face portraits are captured using flashes, care should be taken to verify that the eyes of the subject are open. As long as the requirements for the face portrait from D.1.4 are maintained, one or more flashes or a large surface flash may be used. There shall not be any shadows on the face or in the background of the face portrait that obscure face details important for inspection. Illumination shall not cause any red-eye effect visible in the eyes or other lighting artefacts, such as spots from a ring flash, reducing the visibility of the eyes.
A high colour rendering index is recommended for illumination. See D.1.4.2.9 for details.

The captured image shall contain minimal reflections or bright spots. Diffused lighting, multiple balanced sources or other appropriate lighting methods should be used. A single bare point light source like a camera-mounted direct flash shall not be used for imaging. Lamp reflectors or other technologies that provide non-point illumination should be used.

D.1.4.2.7 Contrast

For each patch of skin on the capture subject's face, the gradations in textures shall be clearly visible, i.e., be of reasonable contrast. Whites of eyes shall be clearly light or white (when appropriate) and dark hair or face features (when appropriate) shall be clearly dark. Generally, the face portrait shall have appropriate brightness and good contrast between face, hair and background. See Figure D.16.

Figure D.16 - Examples for compliant and non-compliant exposure

D.1.4.2.8 Dynamic range
The dynamic range of the image should have at least 50 % of intensity variation in the face region of the image. The face region is defined as the region from crown to chin and from the left ear to the right ear. This recommendation may require an adjustment of the equipment settings on an individual basis when the skin tone is excessively light or dark. In the rectangle between the ISO/IEC 14496-2 feature points:
  • 2.1: Bottom of the chin,
  • 10.9: Upper contact point between left ear and face,
  • 10.10: Upper contact point between right ear and face, and
  • 11.1: Middle border between hair and forehead,
all colour channels should have at least 50 % of intensity variation. As this may be difficult to achieve, best efforts should be made to get as close as possible to that requirement. See Figure D.17 for an illustration of the recommended measuring zone and Figure D.18 for examples of good and bad quality images.
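The 50 % recommendation can be checked per channel with a simple sketch; 8-bit pixel values and all names are assumptions:

```python
def intensity_variation_ok(face_pixels, full_scale=255, min_fraction=0.5):
    """Check that each colour channel spans at least min_fraction of the range.

    face_pixels: iterable of (R, G, B) tuples sampled from the face rectangle
    between feature points 2.1, 10.9, 10.10 and 11.1 (8-bit values assumed).
    """
    pixels = list(face_pixels)
    for ch in range(3):
        values = [p[ch] for p in pixels]
        if (max(values) - min(values)) / full_scale < min_fraction:
            return False
    return True
```

As the text notes, this is a recommendation rather than a hard requirement, so a production pipeline would report how close each channel comes rather than simply rejecting the image.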

Figure D.17 - Recommended dynamic range measuring zone


a) Compliant face portrait
b) Too low dynamic range

Figure D.18 - Compliant and non-compliant dynamic range

D.1.4.2.9 Colour

All images should be captured in colour. Newly designed enrolment processes should capture colour images only.
The captured face portrait shall be a true-colour representation of the holder in a typical colour space such as sRGB as specified in IEC 61966-2. Other true-colour representations may be used as long as the colour profile is embedded in the image.
The sensor of the camera shall capture the entire visible wavelength range, basically the wavelengths between 400 nm and 700 nm. This allows correct rendering of the natural colours seen by humans. Unnaturally coloured lighting, i.e., yellow, red, etc., shall not be used. Care should be taken to correct the white balance of image capture devices. The lighting shall produce a face image with naturally looking flesh tones when viewed in typical examination environments. See Figure D.19.

Figure D.19 - Examples for compliant and non-compliant colour setups
The RGB values from the capturing device should be converted to an appropriate RGB space as required by the data format.
Dedicated near infra-red cameras shall not be used for image acquisition.

Colour calibration using an 18 % grey background or another method such as white balancing should be applied.
White balance shall be properly set in order to achieve high-fidelity skin tones. Quality assurance measurements of light conditions and camera system response should be made when a recommended CIE Standard Illuminant D65 (see ISO 11664-2) or a similar continuous-spectrum daylight illuminant and a camera and/or camera control software are used to take pictures. In practice it is necessary to reduce the ambient light emanating from uncontrolled daylight sources, fluorescent or similar light sources, and reflections from surfaces.
Imaging fidelity measurements for photo studio and stationary registration office installations may be done either using a light spectrum analyser to define the spectral characteristics of the illuminants or by analysing measurement target images using software applications.
E.3 contains a methodology for measuring colour quality and recommended values.

Colour quality should be measured in terms of colour error using the CIEDE2000 formula (deltaE2000) on a standardized test pattern according to the methodology in E.3. The average deltaE2000 of all colour patches should not exceed 4 for scanners and 10 for camera systems. The maximum deltaE2000 for any colour patch should not exceed 15 for scanners and 20 for camera systems. Measured CIELAB human skin tone a* and b* values shall be positive, as shown in E.3. See References [25] and [26] for explanations. Negative a* and b* values are acceptable for medical reasons only.
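The deltaE2000 limits can be expressed as a small compliance sketch; the deltaE2000 values themselves are assumed to come from an external measurement of the E.3 test pattern, and the names are illustrative:

```python
# Limits from the paragraph above: average and per-patch maximum deltaE2000.
LIMITS = {
    "scanner": {"average": 4.0, "maximum": 15.0},
    "camera": {"average": 10.0, "maximum": 20.0},
}

def colour_error_ok(delta_e_values, system):
    """Check measured per-patch deltaE2000 values against the system's limits."""
    limits = LIMITS[system]
    average = sum(delta_e_values) / len(delta_e_values)
    return average <= limits["average"] and max(delta_e_values) <= limits["maximum"]
```

The CIEDE2000 computation itself (from CIELAB coordinates of reference and measured patches) is deliberately left out, as the text defers the measurement methodology to E.3.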

D.1.4.2.10 Noise

The enrolment should be made in a controlled scene; the picture should be captured with a high signal-to-noise ratio. Noise is not information contained in the original scene but is created by the electronics due to a too high level of amplification. ISO sensitivity settings of ISO 100 and ISO 200 typically reduce noise; for high-quality cameras, ISO 400 and ISO 800 may also be used. Noise can be minimized by correct exposure at a low ISO setting.
The signal-to-noise ratio (SNR) is one indicator of the overall ability of a collection system to accurately capture a subject's appearance. Unwanted variations in the response of an imaging system (i.e., noise) are inherent in the capture process of a digital representation of a physical scene and arise from the interplay between the system components (e.g., sensor and lens) and the capture environment (e.g., subject illumination). Reducing overall noise to improve the SNR benefits human examiners and automated face analysis systems which rely on high-quality subject images. SNR should be computed as prescribed in ISO 15739:2017, 4.7, which incorporates a human visual model to calculate the human-observable (i.e., perceived) SNR of the overall collection system.
Commercial software designed for use by photo studios and registration office imaging systems is available with accompanying standard test targets for computing SNR.

D.1.4.2.11 Filters

Polarization filters shall not be used in front of the light sources. Linear polarization filters shall not be used in front of the camera lens as they interfere with autofocus cameras and thus reduce or remove skin texture information which might be used by face image comparison algorithms. Circular polarizing filters decrease reflections that show up in eyeglasses and may be used in front of the camera lens.

D.1.4.3 Subject conditions

D.1.4.3.1 Pose

The subject should be instructed to look directly at the camera and to keep his or her head erect. Typically, people are able to adopt such a position if instructed. Care should be taken to maintain the full frontal pose as well as possible. See Figure D.20.

Figure D.20 - Pose examples

The shoulders shall be square on to the camera, parallel to the camera imaging plane. Portrait-style photographs where the subject is looking over the shoulder shall not be used. See Figure D.21.

Figure D.21 - Pose examples
The pitch of the head shall be less than ± 5 ± 5 +-5^(@)\pm 5^{\circ} from frontal. The yaw of the head shall be less than ± 5 ± 5 +-5^(@)\pm 5^{\circ} from frontal. The roll of the head shall be less than ± 8 ± 8 +-8^(@)\pm 8^{\circ}, it is recommended to keep it below ± 5 ± 5 +-5^(@)\pm 5^{\circ}. Any stronger pose deviation may have negative impact on face recognition error rates. Therefore, effort should be spent to ensure that all angles are as small as possible. See Table D.7. For an illustration of the angles see Figure 3. For samples showing correct pose and pose deviations see Figure D.22.
頭部俯仰角度應小於 ± 5 ± 5 +-5^(@)\pm 5^{\circ} (相對於正面)。頭部偏轉角度應小於 ± 5 ± 5 +-5^(@)\pm 5^{\circ} (相對於正面)。頭部傾斜角度應小於 ± 8 ± 8 +-8^(@)\pm 8^{\circ} ,建議保持在 ± 5 ± 5 +-5^(@)\pm 5^{\circ} 以下。任何過大的姿勢偏差都可能對人臉辨識錯誤率產生負面影響。因此應盡量確保所有角度都維持在最小範圍。詳見表 D.7。角度示意圖請參閱圖 3。正確姿勢與姿勢偏差範例請參閱圖 D.22。
Table D.7 - Pose angle requirements and recommendations

| Criterion | Level | Limits |
| :--- | :--- | :--- |
| Pose angle | Requirement | Pitch ≤ ±5°, yaw ≤ ±5°, roll ≤ ±8° |
| Pose angle | Best practice | Pitch ≤ ±5°, yaw ≤ ±5°, roll ≤ ±5° |
Figure D.22 - Pose angle examples
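The pose-angle limits of Table D.7 lend themselves to a mechanical check. A minimal sketch in Python; the function name and the convention that angles are given in degrees from frontal are illustrative, not part of the standard:

```python
def pose_compliant(pitch, yaw, roll, best_practice=False):
    """Check the Table D.7 pose-angle limits (angles in degrees from frontal).

    Requirement: pitch and yaw within +/-5 degrees, roll within +/-8 degrees.
    Best practice additionally tightens roll to +/-5 degrees.
    """
    roll_limit = 5.0 if best_practice else 8.0
    return abs(pitch) <= 5.0 and abs(yaw) <= 5.0 and abs(roll) <= roll_limit
```

A head rolled by 7° therefore meets the requirement but not the best-practice recommendation.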

D.1.4.3.2 Expression

The face shall have a neutral expression; in particular, the capture subject shall not smile. The mouth shall be closed; the teeth shall not be visible. A smile is not allowed, even with a closed jaw. The eyebrows shall not be raised. Squinting and frowning shall not be visible. See Figure D.23.

Figure D.23 - Expression examples
The mouth is considered to be closed if the distance A between the inner borders of the lips (distance between feature points 2.2 and 2.3) is less than 50 % of the thickness of the lower lip B (distance between the feature points 2.3 and 8.2). See Figure D.24.

Figure D.24 - Definition of a closed mouth
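The closed-mouth criterion above is a simple ratio test on three feature points. A minimal sketch in Python, assuming feature points are given as (x, y) coordinate pairs; the function name is illustrative:

```python
import math

def mouth_closed(fp_2_2, fp_2_3, fp_8_2):
    """Mouth is closed if the inner-lip gap A (feature points 2.2 to 2.3)
    is less than 50 % of the lower-lip thickness B (points 2.3 to 8.2)."""
    a = math.dist(fp_2_2, fp_2_3)  # distance A: inner borders of the lips
    b = math.dist(fp_2_3, fp_8_2)  # distance B: thickness of the lower lip
    return a < 0.5 * b
```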

D.1.4.3.3 Eye visibility

Both eyes shall be opened naturally, but not forced wide-open. Pupils and irises, including iris colour, shall be completely visible, although there may be exceptions due to ethnicity or other individually specific reasons. The eyes shall look into the camera unless there are medical conditions preventing this. There should not be strong shadows in the eye sockets. See Figure D.25 for examples.

Figure D.25 - Eye visibility examples
Any lighting artefacts present in the region of the eyes shall not obscure eye details such that identification becomes difficult. Lighting artefacts shall not be larger than 15 % of the area of the iris. If there are unacceptable reflections, the illumination should be relocated appropriately. The pitch shall not be increased by moving the head forward.
Examples of setups preventing or at least reducing lighting artefacts are given in E.1.

The eye visibility zone (EVZ) is defined as the covering rectangle having a distance V of at least 5 % of the IED to any part of the visible eye ball. Figure D.26 indicates the distance V of the covering rectangle to the visible parts of the eye ball. The EVZ shall be completely visible and unobscured.

Figure D.26 - Illustration of the EVZ
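The EVZ construction can be sketched as a rectangle expansion, assuming the visible eye ball is given as a tight axis-aligned bounding box (x0, y0, x1, y1) in pixels; the function name is illustrative:

```python
def eye_visibility_zone(eye_bbox, ied):
    """Expand the tight bounding box of the visible eye ball by
    V = 5 % of the inter-eye distance (IED) on every side."""
    x0, y0, x1, y1 = eye_bbox
    v = 0.05 * ied  # minimum margin V
    return (x0 - v, y0 - v, x1 + v, y1 + v)
```

The whole returned rectangle, not just the eye ball itself, must be visible and unobscured.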
Contact lenses that change the appearance of the iris, including its size and shape, shall not be worn. The pattern of the lens shall not exceed the limbus.

D.1.4.3.4 Accessories: Glasses

If glasses are permitted by the issuer, subjects may wear glasses during image capture if they typically do so. Glasses other than those worn due to ametropia shall not be worn. Reading glasses shall not be worn during image capture. The lens area of glasses shall be made of fully transparent material. Tinted glasses, sunglasses, and glasses with polarization filters shall not be worn. An exception applies when the subject asserts a medical reason to retain glasses which are not fully transparent. If glasses are worn that tint automatically under illumination, they shall be photographed without tint by tuning the direct illumination or background lighting. In cases where the tint cannot be reduced, the glasses shall be removed or the subject should be asked to use other glasses. See Figure D.27. A circled yellow "P" in the figure indicates compliance depending on the acceptance policy of the issuer.

Figure D.27 - Examples for compliance of glasses
Any lighting artefacts present on the region of the glasses shall not obscure eye details such that identification becomes difficult. Glasses may be repositioned to eliminate lighting artefacts, but frames shall not obscure eye details. The pitch shall not be increased by moving the head forward.
Rims and frames of glasses shall not obscure the eyes or the EVZ. The irises of both eyes shall be visible to the same extent as without glasses. Frames should not be thicker than 5 % of the IED (typically 3 mm to 4 mm). A subject wearing heavier frames should be asked to use other glasses or to remove their glasses.

D.1.4.3.5 Accessories: Head coverings

The region of the face, from the crown to the base of the chin, and from ear-to-ear, shall be clearly visible. Special care shall be taken in cases when veils, scarves or head covering cannot be removed for religious reasons to ensure these coverings do not obscure any face features and do not generate shadow. Head coverings shall not be accepted except in circumstances specifically approved by the issuing state of the MRTD. Such circumstances may be religious, medical or cultural. If head coverings are allowed, they shall be firm fitting and of a plain uniform colour with no pattern and no visible perforations and the region between hair lines, both forwards of the ears and chin including cheeks, mouth, eyes, and eyebrows shall be visible without any distortion or shadows. For examples, see Figure D.28.
The elliptically shaped region between the following face feature points as defined in ISO/IEC 14496-2 shall be visible without any intensive shadows:
  • 2.1: Bottom of the chin,
  • 10.9: Upper contact point between left ear and face,
  • 10.10: Upper contact point between right ear and face, and
  • 11.1: Middle border between hair and forehead.
An issuer may or may not require that the ears are visible. The capture process should minimize shadows and obscuration of features in the face region. This might involve adjustment of the head coverings. See Figure D.28.


a) Compliance depends on issuer policy
b) Compliance depends on issuer policy
c) Face not completely visible
d) Non-uniform head covering
f) Low background contrast

Figure D.28 - Head covering examples

D.1.4.3.6 Accessories: Face ornamentation 

Face ornamentation which obscures the face shall not be present. Concerning face ornaments not obscuring the face, the issuer may use its discretion as to the extent to which face ornaments may appear in the face portrait. In any case, only permanently worn face ornaments may appear in the face portrait. See Figure D.29. 
Figure D.29 - Face ornamentation examples

D.1.4.3.7 Style: Make-up, hair style 

People usually try to look better than normal in an ID photo. In some extreme cases, excessive use of make-up affects computerized as well as human face recognition capabilities. Therefore, the subject should only wear typical everyday make-up.
There shall be no dirt visible on the face in a captured face portrait. It should be considered that dermatological problems could cause skin properties that look like dirt. The hair of the subject shall not cover any part of the eyes. The hair should not cover any part of the EVZ. See Figure D.30. Eye patches shall not be worn unless required for a medical reason. 
Figure D.30 - Hair style examples

D.1.4.4 Face portrait dimensions and head location 

The head shall be centred in the final face portrait as described in this clause. The referenced feature points are defined in ISO/IEC 14496-2. See Figure 9, Figure D.32, and Table D.8. 
The image width A to image height B aspect ratio should be between 74 % and 80 %. The imaginary line H is defined as the (almost horizontal) line through the eye centres of the left eye (feature point 12.1) and the right eye (feature point 12.2).
The centre of H is the face midpoint M. The horizontal distance Mh between the left image border and M shall be between 45 % and 55 % of A. The vertical distance Mv between the top image border and M shall be between 30 % and 50 % of B. The mouth centre (feature point 2.3) and M define the imaginary (almost vertical) line V. Note that V and H are not necessarily perpendicular.
The head width W is defined as the distance between the two imaginary lines parallel to the line V; each imaginary line is drawn between the upper and lower lobes of each ear (feature points 10.2/10.6 for the right ear and 10.1/10.5 for the left ear). The W to A ratio shall be between 50 % and 75 %. This constraint is more important than including the entire hairline in the photograph for subjects with large hair volume.
The head length L is defined as the distance between the base of the chin (feature point 2.1) and the crown (feature point 11.4) measured on the imaginary line V. If these feature points are not exactly located on V, their vertical projection onto V shall be used. The L to B ratio shall be between 60 % and 90 %.
Often, the location of crown, chin or ears cannot be determined precisely. In such a case, a good guess shall be made.
For examples see Figure D.31.

Figure D.31 - Face portrait dimensions and head location examples
Table D.8 - Geometric face portrait requirements
| Term | Description | Requirement |
| :--- | :--- | :--- |
| A | Image width | |
| B | Image height | 74 % ≤ A/B ≤ 80 % |
| H | Line through the centres of the left eye (feature point 12.1) and the right eye (feature point 12.2) | |
| M | Face centre (midpoint of H) | |
| Mh | Distance from the left side of the image to M | 45 % ≤ Mh/A ≤ 55 % |
| Mv | Distance from the top of the image to M | 30 % ≤ Mv/B ≤ 50 % |
| V | Line through the mouth centre (feature point 2.3) and M | |
| W | Head width: distance between the two imaginary lines parallel to line V; each imaginary line is drawn between the upper and lower lobes of each ear (feature points 10.2/10.6 for the right ear and 10.1/10.5 for the left ear) | 50 % ≤ W/A ≤ 75 % |
| L | Head length: distance between the base of the chin (feature point 2.1) and the crown (feature point 11.4) measured on the imaginary line V; if these feature points are not exactly located on V, their vertical projection onto V shall be used | 60 % ≤ L/B ≤ 90 % |
NOTE Both ISO/IEC 19794-5:2005 (edition 1) and ISO/IEC 19794-5:2011 (edition 2) comply with the geometric requirements of Table D.8 for the Full Frontal and Token Face image types.
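The ratio requirements of Table D.8 can be evaluated together. A minimal sketch in Python, assuming the measurements A, B, Mh, Mv, W and L are given in pixels; the names follow the table, while the function itself is illustrative:

```python
def portrait_geometry(A, B, Mh, Mv, W, L):
    """Evaluate the Table D.8 ratio requirements.

    Returns a dict mapping each criterion to a pass/fail flag.
    """
    return {
        "aspect ratio": 0.74 <= A / B <= 0.80,  # image width over height
        "Mh/A":         0.45 <= Mh / A <= 0.55, # horizontal face centring
        "Mv/B":         0.30 <= Mv / B <= 0.50, # vertical face centring
        "W/A":          0.50 <= W / A <= 0.75,  # head width share
        "L/B":          0.60 <= L / B <= 0.90,  # head length share
    }
```

For the 413 × 531 pixel example used later in this annex, a centred head with W = 260 and L = 400 passes all five checks.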
Figure D.32 shows a typical example of a face portrait. In Figure D.32 a) the intersection of the two rectangles marks the region where the centre point M shall be located. In Figure D.32 b) the smaller rectangle in the face portrait shall be completely included in the head; the head itself shall be completely included in the larger rectangle. Note that the locations of these two rectangles do not depend on the location of M; the rectangles can be moved freely and independently of each other as long as they stay parallel to the borders of the image. Figure D.33 gives samples where the faces do not fit into the larger rectangle or do not fill the smaller rectangle.

Figure D.32 - Sample face portraits with the respective minimal and maximal head dimensions

Figure D.33 - Sample face portraits not complying with minimal and maximal head dimensions

D.1.4.5 Children

D.1.4.5.1 General

This subclause specifies additional guidance for capturing face portraits of children. Care should be taken to capture such images according to the specifications; however, sometimes this is not possible or would cause great discomfort. Therefore, some requirements may be relaxed for children as specified below. See Figure D.34, Figure D.35 and Figure D.36 for sample images.

D.1.4.5.2 Children below one year

Deviating from the specifications in D.1.4, babies under one year should be in an upright position, but it is acceptable to capture the face portrait with the baby lying on a white or plain light-coloured blanket which conforms to the requirements in D.1.4.2.5 and D.1.4.2.9. Alternatively, the baby may be placed in a baby seat as long as the background behind the head of the baby conforms to the requirements above and no portions of the baby seat are visible in the face portrait.
Deviating from the specifications in D.1.4.3.3, it is not necessary that babies under one year have their eyes open.
Hands, arms and other body parts of an assisting person used to support the positioning of the subject, e.g., parents supporting their child, shall not be visible in the image. Shadows of these assistant parts shall not be visible on the face portrait or in the background.

D.1.4.5.3 Children below six years

Deviating from the specifications in D.1.4.3.1, children aged six and under shall face the camera within an angle of ±15° in pitch, yaw, and roll. Deviating from the specifications in D.1.4.3.2, children aged six and under do not need to have a neutral expression. For infants under the age of six, images are acceptable as long as the infant is awake, has his or her eyes open, there are no other people or objects in the photo, the background is uniform, and the face portrait meets the colour requirements in D.1.4.2.9.

D.1.4.5.4 Children below eleven years

Deviating from the specifications in D.1.4.4, for children of up to eleven years L/B shall be between 50 % and 90 %. Furthermore, Mv/B shall be between 30 % and 60 %.

Figure D.34 - Compliant child face portraits

Figure D.35 - Examples of additional objects visible in the image


a) Not looking into the camera
b) Eyes closed
c) Cap
d) No neutral expression

Figure D.36 - Examples of non-compliant poses and expressions

D.1.5 Image storage in the chip

D.1.5.1 General

This subclause specifies properties of the face portrait to be electronically stored in a MRTD. Figure D.37 shows the content of D.1.5 in the MRTD production process chain.

Figure D.37 - Content of D.1.5 in the MRTD production process chain (boxed in red)

The requirements and recommendations given in this subclause shall ensure that the photographic requirements given in D.1.4.2 are retained in the face portrait that is finally stored in a MRTD, with the exception of the pixel count. The minimal requirements specified in this subclause apply to the image finally stored in Data Group 2 as defined in ICAO Doc 9303.

ISO/IEC 39794-5:2019(E)

A submitted face portrait shall have been captured within the last six months before application. Face portraits with a capture time dating back more than three months should not be accepted. Issuers should consider the use of the metadata encoded with the digital image to assure that the photograph is recent. See Table D.9.
Table D.9 - Capture time requirements and recommendations
| Criterion | Level | Limit |
| :--- | :--- | :--- |
| Capture time | Requirement | At most six months before application |
| Capture time | Best practice | At most three months before application |

D.1.5.2 Data format

Face portraits of a subject to be stored in the MRTD chip
  • shall be stored in one of the following formats: JPEG, JPEG2000;
  • should have a minimum IED of 90 pixels, preferably of 120 pixels (see Table D.10);
  • shall be in colour.
These specifications provide adequate spatial sampling rate for use on the MRTD while maintaining an adequate quality for human and machine face recognition purposes.
Table D.10 - IED requirements and recommendations for the chip image
| Criterion | Level | Limit |
| :--- | :--- | :--- |
| IED | Requirement | IED ≥ 90 pixels |
| IED | Best practice | IED ≥ 120 pixels |
The pixel count specified in D.1.4.2.4 applies to the originally captured face portrait and not to the images to be stored in a passport. The processing steps between capturing and passport production might lead to information losses. It is therefore recommended that a higher resolution version of the image is stored in the issuer’s repository.
One of the three possible encodings shall be used:
  • The JPEG sequential baseline (ISO/IEC 10918-1) mode of operation, encoded in the JFIF file format (the JPEG file format).
  • The JPEG-2000 Part-1 code stream format (ISO/IEC 15444-1), lossy, encoded in the JP2 file format (the JPEG2000 file format).
  • The JPEG-2000 Part-1 code stream format (ISO/IEC 15444-1), lossless, encoded in the JP2 file format (the JPEG2000 file format).
The coordinate origin shall be at the upper left, given by coordinate (0, 0), with positive entries from left to right (first dimension) and top to bottom (second dimension).

D.1.5.3 Property mask

The positions of the Properties element in the data structure described in this document should be set for:
  • (medical) dark glasses;
  • head coverings;
  • left and right eye patches;
  • glasses;
  • biometric absence (conditions which could impact landmark detection).
Additionally, the Subject height element should be encoded.

D.1.5.4 Post-acquisition processing

No post-processing other than
  • in-plane rotation and/or
  • cropping and/or
  • down-sampling and/or
  • white balance adjustment and/or
  • ICC colour management transformation and/or
  • processing RAW images into the target encoding (once) and/or
  • compression as described in D.1.5.5
shall be applied to the captured image to create the face portrait. Any processing shall maintain the requirements given in D.1.3 and D.1.4. Any processing shall render skin and hair colours realistically enough to allow straightforward human identification of the MRTD holder. The face images shall not be modified locally, e.g., for removal of scars, pimples or other skin impurities, or to modify the shape or location of the nose, the eyes, the eyebrows or any other face landmarks. The image shall not be modified locally by editing clothes (e.g., a turban).
In particular, any image processing targeting at background removal shall not be implemented. If necessary, the MRTD issuing authority may remove or alter the background in the printed image later in the MRTD production process.

D.1.5.5 Compression

Captured face portraits of a subject should not sacrifice image quality by overly compressing the image.

For maximum effect in human and automated face recognition, the raw image or an image with limited compression should be retained. The JPEG compression ratio shall not exceed 15:1.
EXAMPLE In many cases, such images have a size of at least 12 kBytes for JPEG and JPEG2000 for storage in the chip of an electronic MRTD. The upper limit is defined by the storage space available on the chip and reading time requirements.
The image used for the printing process on the MRTD and for storage in the chip will give better results if not compressed beyond a ratio creating visible artefacts when viewed at 100 % magnification, where a single pixel in the image file is displayed by a single pixel on a monitor or viewing device. This allows for electronic judging of whether an image is overly compressed.
Lossy compression can only be applied once in each of the following steps:
  • one initial compression by the camera itself,
  • one compression done by the photographer or citizen, and
  • one compression done by the issuer.
JPEG 2000 enables compressing to a target file size. If JPEG is used, compression must be performed iteratively towards the target file size, reverting to the original image before each attempt, rather than applying successive compressions.
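The revert-to-the-original rule can be sketched as a helper that always re-encodes the original capture; `compress_to_target` and the quality range below are illustrative, not part of the standard, and `encode` stands in for any JPEG encoder taking an image and a quality setting and returning the encoded bytes:

```python
def compress_to_target(original, encode, target_bytes,
                       qualities=range(95, 9, -5)):
    """Walk down the quality scale, always re-encoding the ORIGINAL
    image rather than the previous lossy output, and stop at the first
    quality whose encoded size fits the target."""
    for quality in qualities:
        data = encode(original, quality)  # exactly one lossy compression
        if len(data) <= target_bytes:
            return quality, data
    raise ValueError("target file size not reachable at any quality")
```

With an image library such as Pillow, `encode` would save the original image to an in-memory buffer at the given JPEG quality and return the buffer contents.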


D.2 General purpose face image

D.2.1 General

This annex describes a profiled face image that meets minimal requirements to acquire an image for general face recognition usage.
One of the following encodings shall be used:
  • The JPEG sequential baseline (ISO/IEC 10918-1) mode of operation, encoded in the JFIF file format (the JPEG file format)
  • The JPEG-2000 Part-1 code stream format (ISO/IEC 15444-1), lossy or lossless, encoded in the JP2 file format (the JPEG2000 file format)
  • The PNG specification (ISO/IEC 15948). PNG shall not be used in its interlaced mode and not for images that have been JPEG compressed before.
Landmarks should be determined on images before compression is applied. Landmarks should be included in the record format if they have been accurately determined, thereby providing the option that these parameters do not have to be re-determined when the image is processed for face recognition tasks. The landmarks should be determined by computer-automated detection mechanisms followed by human validation. It is recommended to encode the following landmarks: the middle point of the eyes (12.1 and 12.2), the base of the nose (9.4, 9.5, and 9.15) and the upper lip of the mouth (8.4, 8.1 and 8.3).
The 2D Image representation block shall be present. The value of the 2D face image kind shall be General purpose.

D.2.2 Image data compression requirements and recommendations

Best practice for compression without a region of interest is:

a) The compressed file size should not be smaller than 11 KB on average.

b) JPEG2000 should be preferred over JPEG.
JPEG2000 can be used to implement region of interest (ROI) compression, as it is a technique specified in the JPEG2000 standard and well defined for JPEG2000 software libraries. JPEG2000 ROI encoding can be used to achieve smaller file sizes.
The inner region of an image consists of the pixels whose X and Y coordinates satisfy −1,5 W ≤ X ≤ 1,5 W and −1,8 W ≤ Y ≤ 1,8 W in the Cartesian coordinate system with the landmark Prn as origin, where W is the inter-eye distance. The outer region of an image consists of all pixels of the image that are not in the inner region.
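The inner-region definition above, as a predicate over pixel coordinates relative to the landmark Prn; the function name is illustrative (decimal commas in the text become decimal points in code):

```python
def in_inner_region(x, y, w):
    """True if the pixel at (x, y) lies in the inner region, in a
    Cartesian system with the landmark Prn as origin; w is the
    inter-eye distance."""
    return -1.5 * w <= x <= 1.5 * w and -1.8 * w <= y <= 1.8 * w
```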
The inner region of a face image used for comparison can be compressed to a low ratio, while the outer region of the image is compressed to a higher ratio. The resulting image is smaller in size, but those parts of the image used for comparison retain high quality while the remainder of the image maintains their usefulness for visual inspection. A standard compliant JPEG2000 decoder with ROI support will decode an ROI image regardless of the location of ROI regions.
The use of region of interest compression for situations where computer alignment is performed without human verification is not recommended. It is important to note that additional compression can be achieved by defining inner and outer regions that are based on the face area.
不建議在未經人工驗證而僅由電腦對齊的情況下使用興趣區域壓縮技術。需特別注意的是,透過定義基於臉部區域的內外區域可實現額外壓縮效果。
When derived from a 300 dpi image, an inner region can be defined as including the entire face from crown to chin and ear to ear. Best practice indicates that a compression ratio of 60 : 1 60 : 1 60:160: 1 using JPEG2000 preserves comparison performance. If a 50 : 1 50 : 1 50:150: 1 ratio is used for the inner region, 200 : 1 200 : 1 200:1200: 1 can be used on the outer region with an acceptable level of degradation for visual inspection purposes. For a colour,
當從 300 dpi 的影像中擷取時,可將內側區域定義為包含從頭頂到下巴及耳到耳的整個臉部。最佳實務顯示,使用 JPEG2000 以 60 : 1 60 : 1 60:160: 1 的壓縮比能保持比對效能。若內側區域採用 50 : 1 50 : 1 50:150: 1 的壓縮比,則外側區域可使用 200 : 1 200 : 1 200:1200: 1 的壓縮比,此設定在視覺檢查用途上仍可接受一定程度的畫質降級。至於彩色影像,
300 dpi, 35 mm × 45 mm 35 mm × 45 mm 35mmxx45mm35 \mathrm{~mm} \times 45 \mathrm{~mm} JPEG2000 image ( 413 pixels × 531 × 531 xx531\times 531 pixels, 658 KB uncompressed), with a 240 pixels × 320 × 320 xx320\times 320 pixels ( 230 , 4 KB 230 , 4 KB 230,4KB230,4 \mathrm{~KB} ) inner region, the sizes after compression are:
300 dpi、 35 mm × 45 mm 35 mm × 45 mm 35mmxx45mm35 \mathrm{~mm} \times 45 \mathrm{~mm} JPEG2000 格式影像(未壓縮時為 413 像素 × 531 × 531 xx531\times 531 像素、658 KB),其 240 像素 × 320 × 320 xx320\times 320 像素( 230 , 4 KB 230 , 4 KB 230,4KB230,4 \mathrm{~KB} )的內側區域經壓縮後的大小為:
  • 200:1 outer region: ( 658 KB 230 , 4 KB ) / 200 = 2 , 14 KB ( 658 KB 230 , 4 KB ) / 200 = 2 , 14 KB (658KB-230,4KB)//200=2,14KB(658 \mathrm{~KB}-230,4 \mathrm{~KB}) / 200=2,14 \mathrm{~KB};
    200:1 外側區域: ( 658 KB 230 , 4 KB ) / 200 = 2 , 14 KB ( 658 KB 230 , 4 KB ) / 200 = 2 , 14 KB (658KB-230,4KB)//200=2,14KB(658 \mathrm{~KB}-230,4 \mathrm{~KB}) / 200=2,14 \mathrm{~KB}
  • 50:1 inner region: ( 230 , 4 KB ) / 50 = 4 , 61 KB ( 230 , 4 KB ) / 50 = 4 , 61 KB (230,4KB)//50=4,61KB(230,4 \mathrm{~KB}) / 50=4,61 \mathrm{~KB};
    50:1 內側區域: ( 230 , 4 KB ) / 50 = 4 , 61 KB ( 230 , 4 KB ) / 50 = 4 , 61 KB (230,4KB)//50=4,61KB(230,4 \mathrm{~KB}) / 50=4,61 \mathrm{~KB}
  • total file size: 2 , 14 KB + 4 , 61 KB = 6 , 75 KB 2 , 14 KB + 4 , 61 KB = 6 , 75 KB 2,14KB+4,61KB=6,75KB2,14 \mathrm{~KB}+4,61 \mathrm{~KB}=6,75 \mathrm{~KB}. File size reduction: 40 % 40 % ∼40%\sim 40 \%.
    總檔案大小: 2 , 14 KB + 4 , 61 KB = 6 , 75 KB 2 , 14 KB + 4 , 61 KB = 6 , 75 KB 2,14KB+4,61KB=6,75KB2,14 \mathrm{~KB}+4,61 \mathrm{~KB}=6,75 \mathrm{~KB} 。檔案大小縮減: 40 % 40 % ∼40%\sim 40 \%
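The worked example above is simple arithmetic and can be reproduced directly. A sketch (the function name is illustrative):

```python
def roi_compressed_size_kb(uncompressed_kb: float, inner_kb: float,
                           inner_ratio: float, outer_ratio: float) -> float:
    """Estimated total file size after ROI compression, following the
    worked example in D.2.2: inner and outer regions are compressed
    at separate ratios and the results are summed."""
    outer_kb = (uncompressed_kb - inner_kb) / outer_ratio
    return inner_kb / inner_ratio + outer_kb

# Worked example from the text: 658 KB image, 230.4 KB inner region,
# 50:1 inner and 200:1 outer compression -> about 6.75 KB.
total = roi_compressed_size_kb(658.0, 230.4, 50.0, 200.0)
```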

D.2.3 Scene requirements and recommendations

D.2.3.1 Pose

Pose is known to strongly affect the performance of automated face recognition systems. Thus, the frontal pose shall be used. Rotation of the head shall be less than ±5° from frontal in pitch and yaw. Pose variations that lead to an in-plane rotation of the head can be more easily compensated by automated face recognition systems. Therefore, the rotation of the head shall be less than ±8° from frontal in roll.

The best practice is that the rotation of the head should be less than ±5° from frontal in every direction: roll, pitch and yaw. The optimum height of the camera is at the subject's eye level. Height adjustment can be done by either using a height-adjustable stool or adjusting the tripod's height. The subject should be instructed to look directly at the camera and to keep his or her head erect and shoulders square to the camera.
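The normative and best-practice rotation limits above can be combined into one check. A sketch (the function name and the boolean flag are illustrative):

```python
def pose_within_limits(pitch_deg: float, yaw_deg: float, roll_deg: float,
                       best_practice: bool = False) -> bool:
    """Check the D.2.3.1 head-rotation limits relative to frontal.

    Normative: |pitch| and |yaw| less than 5 degrees, |roll| less than
    8 degrees. Best practice tightens all three axes to 5 degrees."""
    roll_limit = 5.0 if best_practice else 8.0
    return (abs(pitch_deg) < 5.0 and abs(yaw_deg) < 5.0
            and abs(roll_deg) < roll_limit)
```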

D.2.3.2 Expression

Expression is known to strongly affect the performance of automated face recognition systems. It is recommended that the Expression element is present.

The expression should be neutral (non-smiling) with both eyes open normally (i.e., not wide open) and mouth closed (the mouth is closed if the distance between landmarks 2.2 and 2.3 is less than 50 % of the distance between landmarks 2.3 and 8.2). Every effort should be made to have the supplied images comply with this specification. A smile with closed or open jaw, raised eyebrows, eyes looking away from the camera, squinting or frowning are not recommended.
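The mouth-closed criterion above is a landmark-distance ratio and can be checked mechanically. A sketch, assuming landmarks are available as (x, y) pixel coordinates (this input format is an assumption, not from the standard):

```python
import math

def mouth_is_closed(lm_2_2, lm_2_3, lm_8_2) -> bool:
    """Mouth-closed criterion from D.2.3.2: the distance between landmarks
    2.2 and 2.3 is less than 50 % of the distance between landmarks
    2.3 and 8.2. Each argument is an (x, y) coordinate pair."""
    lip_gap = math.dist(lm_2_2, lm_2_3)
    reference = math.dist(lm_2_3, lm_8_2)
    return lip_gap < 0.5 * reference
```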

D.2.3.3 Shoulders

Shoulders shall be square on to the camera. Portrait style photographs where the subject is looking over one shoulder are not acceptable.

D.2.3.4 Backgrounds

The specification of a certain background is not normative for the creation of general purpose face images. A consideration of the background is important for computer-based face recognition because the first step in the computer face recognition process is the segmentation of the face from the background.

D.2.3.5 Subject and scene lighting

Lighting shall be equally distributed on the face. There shall be no significant direction of the light from the point of view of the photographer. The ratio between the median intensities computed on square regions centred on landmarks 5.3 and 5.4, each with side length 20 % of the inter-eye distance, shall be between 0,5 and 2,0.
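The illumination-symmetry criterion above can be sketched directly; here the two patches are passed as flat iterables of pixel intensities (an illustrative input format, not from the standard):

```python
from statistics import median

def lighting_ratio_ok(patch_5_3, patch_5_4) -> bool:
    """D.2.3.5 check: the ratio of the median intensities of the square
    patches centred on landmarks 5.3 and 5.4 (side length 20 % of the
    inter-eye distance) shall lie between 0.5 and 2.0."""
    m_a = median(patch_5_3)
    m_b = median(patch_5_4)
    if m_b == 0:
        return False  # degenerate patch; cannot form a ratio
    return 0.5 <= m_a / m_b <= 2.0
```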

D.2.3.6 Hot spots, specular reflections, and other lighting artefacts

Hot spots (i.e., bright regions that result from light shining directly on the face) shall be absent. These artefacts typically occur when a high-intensity focused light source is used for illumination. Diffused lighting, multiple balanced sources or other lighting methods shall be used.

There shall be no lighting artefacts or flash reflections on glasses. Lighting artefacts covering any region of the eyes shall not be present. This applies to any region in the polygon between landmarks 3.8, 3.2, 3.12 and 3.4 for the right eye and between landmarks 3.11, 3.1, 3.7 and 3.3 for the left eye.

D.2.3.7 Eye visibility and eye glasses

The eye pupils and irises shall be visible. There should be no shadows in the eye sockets due to the brow. In cases where pupils or irises are not visible, the pupil or iris not visible element in the Properties element shall be true.

Eye patches shall not be worn. An exception applies when the subject asserts a need to retain the patch (e.g., a medical reason); in these cases, the left eye patch or the right eye patch element in the Properties element shall be true.

Hair should not cover any part of the eyes. It is recommended that hair should not cover landmarks 3.2, 3.8 and 3.12 for the right eye and landmarks 3.1, 3.7 and 3.11 for the left eye, as well as the region above these points that measures 5 % of the inter-eye distance.

If the subject normally wears glasses, they may wear glasses when their photograph is taken, if permitted for the intended application.

Glasses should be clear and transparent. This requirement is intended to exclude dark or otherwise opaque glasses. Tinted glasses or sunglasses shall not be worn. An exception applies when the subject asserts a medical reason to retain tinted glasses; in these cases, the dark glasses element in the Properties element shall be true.

If glasses are worn that tint automatically under illumination, they should be photographed without tint by tuning the direct illumination or background lighting. In cases where the tint cannot be reduced, the glasses shall be removed unless the subject asserts a medical reason to retain the glasses. In cases where tinted glasses are worn, the specification of dark glasses in the Properties element is recommended.

The frames of glasses shall not obscure the eyes. The frames shall not be thicker than 5 % of the inter-eye distance. Rims of glasses cover part of the eye if any part of the rims covers any part of the area enclosed by landmarks 3.2, 3.4, 3.8 and 3.12 for the right eye and landmarks 3.1, 3.3, 3.7 and 3.11 for the left eye, as well as the region around these points that measures 5 % of the inter-eye distance. If the rims of glasses are not visible or are completely transparent, it is assumed that they do not cover any part of the eye.
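The frame-thickness limit above is a single proportion and can be checked as follows (the function name is illustrative):

```python
def frame_thickness_ok(frame_thickness_px: float,
                       inter_eye_distance_px: float) -> bool:
    """D.2.3.7: frames of glasses shall not be thicker than 5 % of the
    inter-eye distance. Both measurements are in pixels."""
    return frame_thickness_px <= 0.05 * inter_eye_distance_px
```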
Lighting artefacts can typically be avoided by increasing the angle between the lighting, subject and camera to 45° or more.

D.2.3.8 Head coverings

In cases where head coverings are present, the head coverings element in the Properties element shall be true.

Head coverings and shadows should be absent. An exception applies to cases in which a subject cannot remove a headdress, veil or scarf (e.g., for religious reasons). In such cases the capture process should minimize shadows and obscuration of the face features in the face region. This might involve adjustment of the head coverings.

D.2.4 Photographic requirements and recommendations

D.2.4.1 Purpose

Rather than impose a particular hardware and lighting capture system, this subclause specifies the desired output image properties. The requirements and recommendations apply to film as well as to digital photography.

This subclause describes the minimum relative dimensions of the full image with respect to the face. The requirements can be met by images taken in both face portrait and landscape mode. Figure 9 shows a face portrait image and head outline displaying lines H and V and dimensions A, B, W and L, which are referenced in the subclauses below. In addition to the requirements in D.2.4.2 through D.2.4.14, the face shall be entirely visible from crown to chin and ear to ear in the image.

NOTE For digital images the requirements related to the minimum inter-eye distance impose further requirements on the minimum head size.

D.2.4.2 Contrast and saturation

For each patch of skin on the capture subject's face, the gradations in texture shall be clearly visible, i.e., of reasonable contrast. In this sense, there shall be no saturation (over- or under-exposure) on the face.

The colour saturation of a 24-bit colour image should be such that, after conversion to greyscale, there are 7 bits of intensity variation in the face region of the image.

D.2.4.3 Focus and depth of field

The subject's captured image shall always be in focus from nose to ears and chin to crown. Although this may result in the background behind the subject being out of focus, this is not a problem.

All images shall have sufficient depth of focus to maintain visibility of all of the subject's face features greater than one millimetre in size (at the face) at the time of capture. This is considered accomplished if, e.g., the individual millimetre markings of rulers placed on the subject's nose and on the ear facing the camera can be seen simultaneously in a captured test image.

In a typical photographic situation, for optimum quality of the captured face, the f-stop of the lens should be set at two (or more) f-stops below the maximum aperture opening when possible, to obtain enough depth of field.

If the camera lacks auto focus, all subject positions will need to be maintained in a defined area for all image captures.

D.2.4.4 Greyscale density

The dynamic range of the image should have at least 7 bits of intensity variation (span a range of at least 128 unique values) in the face region of the image. The face region is defined as the region from crown to chin and from the left ear to the right ear. This recommendation may require camera, video digitizer or scanner settings to be changed on an individual basis when the skin tone is excessively lighter or darker than that of the average (present) population.
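The "at least 128 unique values" reading of the 7-bit recommendation can be checked by counting distinct grey levels in the face region. A sketch, assuming the face region is supplied as an iterable of 8-bit grey values (that input format is an assumption):

```python
def greyscale_dynamic_range_ok(face_pixels) -> bool:
    """D.2.4.4 recommendation: the face region (crown to chin, ear to
    ear) should span at least 7 bits of intensity variation, i.e. at
    least 128 unique grey values."""
    return len(set(face_pixels)) >= 128
```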

D.2.4.5 Unnatural colour

Unnaturally coloured lighting (yellow, red, etc.) is not allowed. Care shall be taken to correct the white balance of image capture devices. The lighting shall produce a face image with natural-looking flesh tones when viewed in typical examination environments. Images showing a red-eye effect, i.e., the common appearance of red eyes on photographs taken with a photographic flash when the flash is too close to the lens, are not acceptable. The iris and the iris colour shall be visible.

Greyscale photographs should be produced from common incandescent light sources. Colour photographs should use colour-balancing techniques such as using a high colour-temperature flash with standard film, or tungsten-balanced film with incandescent lighting.

D.2.4.6 Colour or greyscale enhancement

A process that overexposes or under-develops a colour or greyscale image for purposes of beauty enhancement or artistic pleasure is not allowed. The full spectrum shall be represented on the face image where appropriate. Teeth and whites of eyes shall be clearly light or white (when appropriate), and dark hair or features (when appropriate) shall be clearly dark.

ISO/IEC 39794-5:2019(E)

D.2.4.7 Colour calibration

Colour calibration using an 18 % grey background or another method (such as white balancing) is recommended.

D.2.4.8 Radial distortion of the camera lens

The fish-eye effect associated with wide-angle lenses, which can result in the subject appearing to have an unusually large nose in the image, shall not be present.

While some distortion is almost always present during face portrait photography, the distortion should not be noticeable by human examination.

D.2.4.9 Horizontally centred face

The approximate horizontal midpoints of the mouth and of the bridge of the nose define the imaginary line V (usually the symmetry axis of the face). Furthermore, the imaginary line H is defined as the line through the centres of the left and the right eye. The intersection of V and H defines the point M as the centre of the face. The X-coordinate M_x of M shall be between 45 % and 55 % of the image width.

D.2.4.10 Vertical position of the face

The Y-coordinate M_y of M shall be between 30 % and 50 % of the image height. A single exception is allowed for children under the age of 11 years, in which case the upper limit shall be modified to 60 % (i.e., the centre point of the head is allowed to be lower in the image for children under the age of 11). The origin O of the coordinate system is defined to be in the upper left corner of the image.

D.2.4.11 Width of the image

To ensure that the entire face is visible in the image, the IED shall be between 25 % and 37,5 % of the image width A.

D.2.4.12 Height of the image

In order to assure that the entire face is visible in the image, the minimum image height shall be specified by requiring that the eye-to-mouth distance (the segment between M and feature point 2.3 from ISO/IEC 14496-2:2004, Annex C) shall be between 20 % and 30 % of the vertical height of the image B. A single exception is allowed for children under the age of 11 years, in which case the lower limit shall be modified to 15 %.

D.2.4.13 Image aspect ratio

The (image width : image height) aspect ratio should be between 1:1,25 and 1:1,34.

D.2.4.14 Summary of photographic requirements

Table D.11 below summarizes the photographic requirements for general purpose face images.

Table D.11 - Summary of photographic requirements for general purpose face images

| Clause | Definition | Requirements |
| :--- | :--- | :--- |
| D.2.4.1 | General requirement | Head entirely visible in the image |
| D.2.4.9 | Horizontal position of the face | 0,45 A ≤ M_x ≤ 0,55 A |
| D.2.4.10 | Vertical position of the face | 0,3 B ≤ M_y ≤ 0,5 B |
| D.2.4.10 | Vertical position of the face (children below 11) | 0,3 B ≤ M_y ≤ 0,6 B |
| D.2.4.11 | Width of head | 0,25 A ≤ IED ≤ 0,375 A |
| D.2.4.12 | Length of head | 0,2 B ≤ EMD ≤ 0,3 B |
| D.2.4.12 | Length of head (children below 11) | 0,15 B ≤ EMD ≤ 0,3 B |
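The geometric requirements of D.2.4.9 through D.2.4.12 can be bundled into one check. A sketch, under the assumption that the face centre M, the inter-eye distance (IED) and the eye-to-mouth distance (EMD) have already been measured in pixels (the function name and argument names are illustrative):

```python
def face_geometry_ok(m_x: float, m_y: float, ied: float, emd: float,
                     width_a: float, height_b: float,
                     child_under_11: bool = False) -> bool:
    """Geometry checks from Table D.11.

    m_x, m_y: coordinates of the face centre M; ied: inter-eye distance;
    emd: eye-to-mouth distance; width_a, height_b: image dimensions.
    Children under 11 get a relaxed M_y upper bound and EMD lower bound."""
    my_upper = 0.6 if child_under_11 else 0.5
    emd_lower = 0.15 if child_under_11 else 0.2
    return (0.45 * width_a <= m_x <= 0.55 * width_a
            and 0.3 * height_b <= m_y <= my_upper * height_b
            and 0.25 * width_a <= ied <= 0.375 * width_a
            and emd_lower * height_b <= emd <= 0.3 * height_b)
```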

D.2.5 Digital requirements and recommendations

D.2.5.1 Geometry

Digital cameras and scanners used to capture face images shall produce images with a pixel aspect ratio of 1:1. That is, the number of pixels per inch in the vertical dimension shall equal the number of pixels per inch in the horizontal direction.

The origin of coordinates shall be at the upper left, given by coordinate (0, 0), with positive entries from left to right (first dimension) and top to bottom (second dimension).

D.2.5.2 Colour profile

General purpose face images shall be represented as one of the following:

a) 24-bit RGB colour space, where for every pixel eight (8) bits are used to represent each of the Red, Green and Blue components.

b) 8-bit monochrome colour space, where for every pixel eight (8) bits are used to represent the luminance component.

c) YUV422 colour space, where twice as many bits are dedicated to luminance as to each of the two colour components. YUV422 images typically contain two 8-bit Y samples along with one 8-bit sample of each of U and V in every four bytes.

Interlaced video frames are not allowed for the general purpose face image kind. All interlacing shall be absent (not removed, but absent).
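The storage cost of the three colour representations above follows directly from their bits-per-pixel figures. A sketch (the function name and colour-space keys are illustrative):

```python
def image_buffer_bytes(width: int, height: int, colour_space: str) -> int:
    """Uncompressed buffer size for the D.2.5.2 colour representations.

    'rgb24'  : 3 bytes per pixel (8-bit R, G and B)
    'grey8'  : 1 byte per pixel (8-bit luminance)
    'yuv422' : 2 bytes per pixel on average (two Y samples plus one
               U and one V sample in every four bytes)"""
    bytes_per_pixel = {"rgb24": 3, "grey8": 1, "yuv422": 2}
    return width * height * bytes_per_pixel[colour_space]
```

For the 413 × 531 image of the D.2.2 example, `image_buffer_bytes(413, 531, "rgb24")` gives 657 909 bytes, matching the 658 KB figure quoted there.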

D.2.5.3 Use of near infrared cameras

Dedicated near-infrared cameras shall not be used for acquisition of images of the general purpose face image kind.

D.2.5.4 Pixel count

For an image for optimal human examination and permanent storage, the head width shall be at least 180 pixels, or roughly 90 pixels from eye centre to eye centre.

D.2.5.5 Post acquisition processing

No post-processing other than in-plane rotation and/or cropping and/or downsampling and/or multiple compressions shall be applied to derive a general purpose face image from a captured image. Multiple (i.e., repeated) compressions should be avoided when generating general purpose face images.

D.3 3D Textured face image

D.3.1 General

This annex contains the description of an application profile for a 3D textured face image that meets the requirements of this document to acquire an image for 3D face recognition.

The purpose of the 3D textured application profile is to encode the shape and the texture of the face (see Figure D.38) with high precision. For some use cases, the texture of the face is optional. By an optical backward projection of the 3D representation to a virtual camera with defined lighting, skin rendering of similar image quality to that of 2D representation data should be obtained at different viewing angles.

The 3D shape representation block shall be present. The value of the 3D face image kind shall be textured face image 3D.

The 3D representation data element shall contain only the vertex representation; that means range image and 3D point map shall not be used.

Figure D.38 - Example of 3D textured image representation data, which is composed of the 3D representation data (a list of triangles consisting of 3 vertices each) and Texture map data
Each vertex of the 3D representation data has a 2D UV spatial reference to the Texture map data. The data defined by the textured vertex representation is encoded in two different data structures:

  • The mandatory 3D representation data for the shape of the face, defined by:

    a) a Vertex block defined by the 3D coordinates X, Y and Z and by the spatial coordinates U and V, which refer to the Texture map, and

    b) a Vertex triangle data block referring to the ordered list of vertices.

  • The optional Texture map data for the texture image of the face, where one of the following encodings for the texture map data (which is a 2D image) shall be used:

    a) JPEG sequential baseline (ISO/IEC 10918-1) mode of operation, encoded in the JFIF file format (the JPEG file format);

    b) JPEG 2000 Part 1 code stream format (ISO/IEC 15444-1), lossy or lossless, encoded in the JP2 file format (the JPEG2000 file format); or

    c) PNG (ISO/IEC 15948). PNG shall not be used in its interlaced mode and shall not be used for images that have previously been JPEG compressed.

The formats unknown and other shall not be used. The specification of the Texture map image (i.e., the image width, image height, number of channels, number of bits per channel and ICC profile) is stored inside the image header.
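The two data structures above can be sketched as plain records. The class names and the consistency check are illustrative and are not part of the encoding defined by the standard:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Vertex:
    """One entry of the Vertex block: a 3D position (X, Y, Z) plus the
    UV spatial reference into the Texture map."""
    x: float
    y: float
    z: float
    u: float
    v: float

@dataclass
class TexturedFace3D:
    """Minimal sketch of the textured vertex representation: a vertex
    list and a triangle list of index triples into it, plus an optional
    texture payload (JPEG, JPEG 2000 or PNG bytes)."""
    vertices: List[Vertex] = field(default_factory=list)
    triangles: List[Tuple[int, int, int]] = field(default_factory=list)
    texture_map: bytes = b""  # optional; empty if no texture is stored

    def is_consistent(self) -> bool:
        """Every triangle must reference existing vertices."""
        n = len(self.vertices)
        return all(0 <= i < n for tri in self.triangles for i in tri)
```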
Landmarks shall be defined with 3D coordinates.

Landmarks should be determined on images before compression is applied. Landmarks should be included in the record format if they have been accurately determined, thereby providing the option that these parameters do not have to be re-determined when the image is processed for face recognition tasks. The landmarks may be determined by computer-automated detection mechanisms; if necessary, human validation can be applied in such cases. It is recommended to add the following landmarks to the encoding of a 3D image:

  • the eye centres (12.1 and 12.2),
  • the base of the nose (9.4, 9.5 and 9.15), and
  • the upper lip of the mouth (8.4, 8.1 and 8.3).

D.3.2 Image data compression requirements and recommendations

Best practice on compression without a region of interest is:

a) The compressed file size should not be smaller than 100 KB on average.

b) JPEG2000 should be preferred over JPEG.

NOTE For the textured 3D representation, the texture map data refer only to the face texture description and not, e.g., to the background or the shoulders. As a consequence, there is no significant gain from using the region of interest (ROI) compression of JPEG2000.

D.3.3 Scene requirements and recommendations

D.3.3.1 Pose of the 3D representation

Pose is known to strongly affect the performance of automated face recognition systems. However, this sensitivity is less important for 3D representations, which can be rotated without losing information after acquisition.

For the pose of the textured 3D face image representation, the following requirements on the subject position and on the 3D acquisition system geometry apply:

  • Subject position:

The pose of the subject shall be with the head in the rest position. The eyes shall be looking straight forward along the horizontal axis. The shoulder line shall be perpendicular to the gaze axis.

Rotation of the head shall be less than ±5° in pitch, yaw and roll.

  • 3D acquisition system geometry:

The optical axis of the 3D acquisition system shall be horizontal, at the same height as the subject's eyes, and perpendicular to the support line passing through the two eyes. Height adjustment may be done by either using a height-adjustable stool or by adjusting the acquisition system height.

The subject should be instructed to orient his or her gaze parallel to the optical axis via a visible sign. The subject should be instructed to keep his or her head erect and shoulders square to the 3D acquisition system.

The subject shall not move during the acquisition. In particular, no instruction shall request a subject movement during capturing.


D.3.3.2 Expression

Expression is known to strongly affect the performance of automated face recognition systems. It is recommended that the Expression element be present.
The expression should be neutral (non-smiling) with both eyes open normally (i.e., not wide-open) and mouth closed (the mouth is closed if the distance between landmarks 2.2 and 2.3 is less than 50 % of the distance between landmarks 2.3 and 8.2). Every effort should be made to supply images complying with this specification. Smiling with closed or open jaw is not recommended, nor are raised eyebrows, eyes looking away from the camera, squinting or frowning.
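The mouth-closed criterion above can be checked directly from landmark coordinates. A minimal sketch, assuming 2D landmark positions in pixels and the clause's landmark numbering (function and parameter names are illustrative):

```python
import math

def dist(p, q):
    """Euclidean distance between two 2D landmark points (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mouth_closed(lm_2_2, lm_2_3, lm_8_2):
    """The mouth counts as closed if the distance between landmarks 2.2
    and 2.3 is less than 50 % of the distance between 2.3 and 8.2."""
    return dist(lm_2_2, lm_2_3) < 0.5 * dist(lm_2_3, lm_8_2)
```

The same landmark-distance pattern extends to the other geometric checks in this annex.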
The 3D textured face image acquisition system shall not allow expression change. The 3D face image acquisition process should be fast enough to ensure that expression does not change. Morphing and interpolation after the acquisition should not change the expression.

D.3.3.3 Shoulders

Shoulders shall be square on to the camera. Portrait style photographs where the subject is looking over one shoulder are not acceptable.

D.3.3.4 Background

The background shall not be stored inside the 3D vertex encoding. A 3D acquisition system shall be able to differentiate the background from the face, based on the depth data along the Z axis.
The minimal distance between face and background should be 400 mm.

The colour of the background should be uniform and should contrast with skin and hair.

NOTE Some configurations of the background can improve the segmentation of the head, such as a background far behind the subject, a very dark or very light background, or an appropriately coloured background.
Reflections on the face caused by the background should not affect the texture rendering of the facial skin.

D.3.3.5 Subject and scene lighting

Lighting shall be equally distributed on the skin of the face. There shall be no significant direction of the light from the point of view of the subject. The ratio between the median intensities on square regions centred around landmarks 5.3 and 5.4, with side length 20 % of the inter-eye distance, shall be between 0,5 and 2,0.
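The symmetry criterion above can be sketched as a direct comparison of the two median intensities. A minimal pure-Python sketch, assuming an 8-bit greyscale image stored as a list of rows and landmark positions in pixel coordinates (all names are illustrative):

```python
def median(values):
    """Median of a list of numbers."""
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def region_median(image, centre, side):
    """Median grey value of a square region with the given side length,
    centred on `centre` = (x, y); `image` is a list of pixel rows."""
    cx, cy = centre
    half = side // 2
    return median([image[y][x]
                   for y in range(cy - half, cy + half + 1)
                   for x in range(cx - half, cx + half + 1)])

def lighting_symmetric(image, lm_5_3, lm_5_4, ied_px):
    """The ratio of the median intensities around landmarks 5.3 and 5.4
    (square side = 20 % of the inter-eye distance) shall lie between
    0,5 and 2,0."""
    side = max(1, round(0.2 * ied_px))
    ratio = region_median(image, lm_5_3, side) / region_median(image, lm_5_4, side)
    return 0.5 <= ratio <= 2.0
```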
Lighting cannot be perfectly uniform and diffuse on the face during acquisition. In order not to create face texture inconsistency, any head movement should be avoided during the 3D face image acquisition.
In case of pattern projection during the acquisition, the projected patterns should not be perceivable by the subject in order not to perturb the subject's expression.
For RGB acquisition, the sensor of the camera shall capture the entire visible wavelength range, basically the wavelengths between 400 nm and 700 nm. This allows correct rendering of the natural colours as seen by humans. Unnaturally coloured lighting, i.e., yellow, red, etc., shall not be used. Care should be taken to adjust the white balance. A high colour rendering index is recommended for the illumination.
Illumination shall not cause any red eye effect visible in the eyes and should not cause other lighting artefacts such as spots from a ring flash reducing the visibility of the eyes.
The enrolment should be made in a controlled scene; the image should be captured with a high signal-to-noise ratio. Noise is not information contained in the original scene but is created by the electronics due to a too high level of amplification. ISO sensitivity settings at values of ISO 100 and ISO 200 typically reduce noise; for high-quality cameras ISO 400 and ISO 800 may also be used. Noise can be minimized by correct exposure at a low ISO setting.

D.3.3.6 Hot spots and specular reflections

Hot spots (i.e., bright regions that result from light shining directly on the face) shall be absent. These artefacts typically occur when a high intensity focused light source is used for illumination. Diffused lighting, multiple balanced sources or other lighting methods shall be used.
A single bare point shaped light source like a flash mounted on the 3D acquisition system is not acceptable for imaging. Instead, the illumination should be accomplished using other methods that meet the requirements specified in this clause.

D.3.3.7 Eye glasses

Enrollees shall not wear eye glasses during acquisition. Enrolment systems shall provide a smart user interface for enrollees who usually wear glasses, taking into account their temporarily reduced reading capability.
NOTE Eye glasses interfere with the 3D sensor in several respects. They hide some parts of the face. The lenses might not be completely transparent to the capture illumination, leading to potentially wrong volume determination. The direction of light emitted or captured by the 3D acquisition system is perturbed by the lenses.
An exception applies when the subject asserts a medical reason to retain the glasses; in these cases, the dark glasses element in the Properties element shall be set to true, even if the glasses are not dark.

D.3.3.8 Shadows in eye-sockets

There should be no shadows in the eye-sockets caused by the eyebrows. The iris and pupil of the eyes should be visible.
NOTE This recommendation is intended to exclude images in which the eyes are closed (e.g. during a blink) or half closed.

D.3.3.9 Head coverings

In cases where head coverings are present the head coverings element in the Properties block shall be set to true.
Head coverings and shadows caused by head coverings should be absent. An exception applies to cases in which a subject cannot remove a headdress, veil or scarf (e.g., for religious reasons). In such cases the capture process should minimize shadows and obscuration of the face features in the face region. This might involve adjustment of the head coverings.

D.3.3.10 Visibility of pupils and irises

In cases where pupils or irises are not visible the pupil or iris not visible element in the Properties block shall be set to true.

D.3.3.11 Lighting artefacts

There shall be no lighting artefacts visible on the skin of the face.

D.3.3.12 Eye patches

Eye patches shall not be worn. An exception applies when the subject asserts a need to retain the patch (e.g., for a medical reason); in this case, the left eye patch or the right eye patch element in the Properties block shall be true.


D.3.3.13 Hair covering face

Hair should not hide parts of the face and should not cast any shadow on the face.

Hair should not cover any part of the eyes. It is recommended that hair not cover landmarks 3.2, 3.8 and 3.12 for the right eye and landmarks 3.1, 3.7 and 3.11 for the left eye, as well as the region above these points measuring 5 % of the inter-eye distance.

During acquisition, hair should be swept back behind the ears and above the middle of the forehead whenever possible.

D.3.4 3D acquisition system requirements and recommendations

D.3.4.1 Purpose

This clause specifies photographic constraints for the capture of a 3D textured face image. Rather than impose a particular hardware and lighting capture system, this clause specifies the type of output from these systems that is expected.
Note that for digital images the normative requirements related to the minimum inter-eye distance impose further requirements on the minimum head size.

D.3.4.2 Contrast and saturation

For each patch of skin on the capture subject's face, the gradations in texture shall be clearly visible, i.e., of reasonable contrast. In this sense, there shall be no saturation (over- or under-exposure) on the face.
The colour saturation of a 24-bit colour image should be such that after conversion to greyscale, there are 7 bits of intensity variation in the face region of the image.

D.3.4.3 Focus and depth of field

The subject’s captured image shall always be in focus from nose to ears and chin to crown. Although this may result in the background behind the subject being out of focus, this is not a problem.

In a typical photographic situation, for optimum quality of the captured face, the f-stop of the lens should be set at two (or more) f-stops below the maximum aperture opening when possible to obtain enough depth of field.
All images shall have sufficient depth of focus to maintain a spatial sampling rate better than two millimetres on the subject's face features at the time of capture.
The focus and depth of field of the camera shall be set so that the scanned area of the subject is in focus.

The depth of field shall be at minimum 150 mm and should be 250 mm or even more.

A spatial sampling rate finer than one millimetre is considered achieved if the individual millimetre markings of rulers placed on the subject's nose and on the ear facing the camera can be seen simultaneously in a captured test image.
If the camera lacks autofocus, the subject's position needs to be maintained within a defined area for all image captures.

D.3.4.4 Greyscale density

The dynamic range of the image should have at least 7 bits of intensity variation (span a range of at least 128 unique values) in the face region of the image. The face region is defined as the region from crown to chin and from the left ear to the right ear. This recommendation may require 3D acquisition system settings to be changed on an individual basis when the skin tone is excessively lighter or darker than the average (present) population.
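The 7-bit recommendation can be sketched as a count of distinct grey levels in the face region; a minimal sketch, assuming the face region has already been cropped and converted to 8-bit greyscale (names are illustrative):

```python
def grey_dynamic_range_ok(face_region, min_levels=128):
    """The face region (crown to chin, ear to ear) should span at least
    7 bits of intensity variation, i.e. at least 128 unique grey
    levels; `face_region` is a list of pixel rows."""
    return len({v for row in face_region for v in row}) >= min_levels
```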

D.3.4.5 Unnatural colour

Unnaturally coloured lighting, yellow, red, etc., shall not be used. Care shall be taken to correct the white balance of image capture devices. The lighting shall produce a face image with naturally looking flesh tones when viewed in typical examination environments. Images showing the red-eye effect, i.e., the common appearance of red eyes on photographs taken with a photographic flash when the flash is too close to the lens, are not acceptable. The iris and the iris colour shall be visible.

D.3.4.6 Colour or greyscale enhancement

A process that overexposes or under-develops a colour or greyscale image for purposes of beauty enhancement or artistic pleasure is not allowed. The full spectrum shall be represented on the face image where appropriate. Teeth and whites of eyes shall be clearly light or white (when appropriate) and dark hair or features (when appropriate) shall be clearly dark.

D.3.4.7 Colour calibration

Colour calibration using an 18 % grey background or another method (such as white balancing) is recommended.

D.3.4.8 Geometrical distortion and CSD of the 3D acquisition system

By definition, a 3D textured acquisition system is designed to precisely measure the shape and the texture of the face. The 3D scanned face data is an accurate representation of the true appearance of the face.
Geometrical distortion such as magnification distortion and radial distortion, which appears in conventional 2D acquisition systems, is compensated for by a 3D acquisition system. This compensation is possible because each measured vertex has a known three-dimensional position and viewing angle.
For a 3D acquisition system, this Annex does not specify a range for the camera subject distance. The 3D technology of a specific system might have implications on the camera subject distance required in order to achieve the requested accuracy. For many 3D face acquisition systems, this distance is between a few and some dozens of centimetres. For example, passive aero-triangulation 3D acquisition systems used for face acquisition require a short distance such as 50 cm in order to increase the parallax accuracy and to decrease the general size of the system.
NOTE The 3D textured representation data can be projected to 2D representation data just by defining the lighting and the virtual camera properties, i.e., the sensor resolution, the photosite size, the focal length, the camera angle, and the camera subject distance. As a consequence, the magnification distortion can be simulated from a 3D textured face image representation which contains the true 3D geometrical representation of the face.

D.3.4.9 Focal length

The selection of a camera and its lens is a major factor affecting the quality of face images. To ensure high image quality and a standard compliant inter-eye distance (IED), the camera’s sensor must have sufficient pixel dimensions and its lens must be chosen to match its image sensor’s physical dimensions.
For a selected CSD (in millimetres), a camera image sensor with a vertical dimension of $h_{mm}$ (in millimetres) and a requested vertical field of view $H_{\text{FieldOfView}}$ (in millimetres), the focal length $f$ (in millimetres) can be computed using the following relationship in order to optimise the requested field of view of the subject with respect to the sensor dimensions:
$$f \cong h_{mm} \frac{CSD}{H_{\text{FieldOfView}}}$$


D.3.4.10 Sensor resolution in pixel

The resolution of the textured images acquired by a 2D CMOS/CCD sensor can be computed in the following way: For a camera image sensor with a vertical pixel count of $h_{px}$, the inter-eye distance on the sensor in pixels $IED_{px}^{\text{Sensor}}$ may be computed using the following relationship, where $IED_{mm}^{\text{Subject}}$ is the inter-eye distance in millimetres on the subject:
$$IED_{mm}^{\text{Sensor}} = IED_{mm}^{\text{Subject}} \frac{f}{CSD}$$
and  
$$IED_{px}^{\text{Sensor}} = IED_{mm}^{\text{Sensor}} \frac{h_{px}}{h_{mm}}$$
EXAMPLE A camera has the following specification: APS-C sensor, 22,3 mm × 14,9 mm, 2592 px × 1944 px, 5 megapixels. For a CSD of 600 mm, a typical $H_{\text{FieldOfView}}$ of 500 mm and a typical $IED_{mm}^{\text{Subject}}$ of about 62 mm, the calculations show that the focal length $f$ will be about 25 mm. Then $IED_{px}^{\text{Sensor}}$ will be about 336 pixels, well above the requirement of 240 pixels. The finest detail measured by each pixel of the camera is $\frac{H_{\text{FieldOfView}}}{h_{px}}$, which corresponds for this example to around 0,2 mm.
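The relationships in D.3.4.9 and D.3.4.10 can be combined into a small sensor-sizing calculation. A sketch with illustrative numbers (deliberately not the clause's worked example); function and parameter names are assumptions:

```python
def focal_length(h_mm, csd_mm, h_fov_mm):
    """f ~ h_mm * CSD / H_FieldOfView (all quantities in millimetres)."""
    return h_mm * csd_mm / h_fov_mm

def ied_on_sensor_px(ied_subject_mm, f_mm, csd_mm, h_px, h_mm):
    """IED_mm^Sensor = IED_mm^Subject * f / CSD, then converted to
    pixels via the sensor's pixel density h_px / h_mm."""
    ied_sensor_mm = ied_subject_mm * f_mm / csd_mm
    return ied_sensor_mm * h_px / h_mm

# Illustrative numbers: a 24 mm / 4000 px vertical sensor dimension,
# a 1000 mm CSD, a 600 mm vertical field of view and a 60 mm IED.
f = focal_length(24, 1000, 600)                    # 40.0 mm
ied_px = ied_on_sensor_px(60, f, 1000, 4000, 24)   # 400.0 px
```

With these numbers the resulting inter-eye distance comfortably exceeds the 240-pixel requirement of D.3.5.4.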

D.3.5 Digital requirements and recommendations

D.3.5.1 Geometry

D.3.5.1.1 Texture map geometry

The origin of coordinates of the texture map shall be at the upper left, given by coordinate (0, 0), with positive entries from left to right (first dimension) and top to bottom (second dimension).

D.3.5.1.2 3D data geometry

The vertex origin after scale and offset application shall be the centre of the two eyes, i.e., the midpoint between the left eye centre (12.1) and the right eye centre (12.2).
The measuring unit used for 3D representation face image data after scale application shall be the millimetre.

The face orientation after scale and offset application shall follow:
  • Horizontal axis (x), passing between the left eye centre (12.1) and the right eye centre (12.2), and oriented to the left eye centre direction.
  • Depth axis (z), defined by the standard position of the face, which is determined when the head is in the rest position and the eyes are looking straight forward.
  • Vertical axis (y), defined by the right-hand rule from the two other axes.
The pitch of the face in rest position corresponds to common face acquisition. This pitch is slightly different from the pitch defined by the frontal pose, which is associated with the Frankfurt Horizon. The Frankfurt Horizon rule is not adopted here for several reasons. The head pitch of the Frankfurt Horizon does not correspond to the general attitude in rest position of all ethnic groups and involves discomfort and a bad eye position in the orbit (Figure D.39). The Frankfurt Horizon is related to the ear position, which might not be scannable as it is, e.g., covered by hair.

a) Head orientation to the Frankfurt Horizon does not correspond to the eye direction

b) Head pitch is oriented according to the rest position: the face pose does not follow the Frankfurt Horizon

Key

1 Frankfurt Horizon
2 eye direction
3 lowest point of the right eye socket

4 tragion
Figure D.39 - Frankfurt Horizon in relation to eye position/direction

D.3.5.2 Colour profile

3D textured images shall be represented as one of the following. The captured image shall be a true-colour representation of the subject in a typical colour space such as sRGB as specified in IEC 61966-2. Other true-colour representations may be used, but in all cases the ICC colour profile shall be embedded inside the texture map for all formats (JPEG, JPEG 2000 and PNG):

a) 24-bit or 48-bit RGB colour space where, for every pixel, 8 bits or 16 bits are used to represent each of the red, green and blue components.

b) 8-bit or 16-bit monochrome colour space where, for every pixel, 8 bits or 16 bits are used to represent the luminance component.
RGB acquisition is recommended.

Colour quality should be measured in terms of colour error using the CIEDE2000 formula (deltaE2000) on a standardized test pattern. The average deltaE2000 of all colour patches should not exceed 10 for camera systems. The maximum deltaE2000 for any colour patch should not exceed 20 for camera systems. Measured CIELAB a* and b* values of human skin tone shall be positive. Negative a* and b* values are acceptable only for medical reasons.
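The a*/b* positivity requirement can be sketched by converting sRGB pixel values to CIELAB. The conversion below uses the standard sRGB (IEC 61966-2-1, D65 white point) formulas; the sample skin tone in the usage note is illustrative, and the lengthy CIEDE2000 computation itself is omitted:

```python
def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB components to CIELAB (D65 reference white)."""
    def linearise(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linearise(r), linearise(g), linearise(b)
    # Linear sRGB -> CIE XYZ (D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # Normalise by the D65 reference white and apply the Lab transfer function
    xn, yn, zn = x / 0.95047, y / 1.0, z / 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(xn), f(yn), f(zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def skin_tone_plausible(r, g, b):
    """Per this clause, measured a* and b* of skin shall be positive."""
    _, a_star, b_star = srgb_to_lab(r, g, b)
    return a_star > 0 and b_star > 0
```

A warm tone such as (224, 172, 105) passes the check, while a bluish tone of the same components reversed does not.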
Interlaced video frames shall not be used for the 3D textured image type. All interlacing shall be absent.

D.3.5.3 Use of near infrared cameras

If dedicated near infrared cameras are used, one should be aware that the interoperability between white light and near infrared images might be reduced.


D.3.5.4 Spatial sampling rate

When acquisition is done in the visible spectrum, the spatial sampling rate of the texture map shall be such that the IED is at least 240 pixels.
The 3D representation data shall be able to measure shape variation of a size of less than 5 mm on all axes and should be able to measure shape variation of a size of less than 2 mm on all axes.

D.3.5.5 Post-acquisition processing

No post-processing other than creation of the 3D representation data and the corresponding texture, 3D rotation, cropping, downsampling and/or multiple compressions shall be applied to derive a 3D face image from a captured image. Multiple (i.e., repeated) compressions should be avoided when generating 3D textured map images.

D.3.5.6 Calibration texture projection accuracy

The calibration accuracy of the acquisition device shall be high enough such that the mean shift between the texture of the 2D image and the 3D data is less than 1 mm.

D.3.6 Requirements on 3D textured image representation

D.3.6.1 Coordinate system type

The coordinate system type shall be Cartesian. Vertex coordinates are positive and coded without decimals within a range from 0 to 65535 (unsigned short).

D.3.6.2 Scales and offsets

The face centre shall be at the origin after application of the scale factors and the offsets. As a consequence, the three offsets OffsetX, OffsetY and OffsetZ shall be negative.
The transformation to metric coordinates is described by appropriate scaling factors. The unit of the scale factor is the millimetre. The scaling factors shall be the same on all three axes. (ScaleX = ScaleY = ScaleZ). The vertex precision in millimetres is given by the scale value.
As an example, if the scaling factor is set to 0,1 mm, then the vertex encoding leads to a precision of 0,1 mm, and the range shape covers a cube of around 2768 mm ± 3 mm which allows all face and body encoding. This scaling factor of 0,1 mm should be of sufficient precision for most biometric applications.
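The scale/offset encoding can be sketched as a simple quantization round trip; the offset values below are illustrative, chosen negative (as required above) so that the face centre maps inside the unsigned-short range:

```python
SCALE = 0.1                          # millimetres per unit; same on all axes
OFFSET = (-500.0, -500.0, -500.0)    # illustrative negative offsets, in mm

def encode_vertex(xyz_mm):
    """Quantize metric coordinates into the unsigned-short range so that
    decoding via stored * SCALE + OFFSET recovers them to +/- SCALE/2."""
    stored = tuple(round((v - o) / SCALE) for v, o in zip(xyz_mm, OFFSET))
    if not all(0 <= s <= 65535 for s in stored):
        raise ValueError("coordinate outside the unsigned-short range")
    return stored

def decode_vertex(stored):
    """Map stored unsigned-short values back to metric coordinates."""
    return tuple(s * SCALE + o for s, o in zip(stored, OFFSET))
```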

D.3.6.3 Vertex information

The vertex identifier shall not be present. The vertex identifier is implicitly given by the stack order of the vertex list. The first vertex shall have the index 0.
The index defined in the vertex triangle data block refers to this implicit order.

Three vertices, connected to each other by three edges, define a face called a triangle. These vertices are listed in counter-clockwise order when looking at the face from the outside. This rule defines the normal of each triangle. Each triangle shall share two vertices with each of its adjacent triangles.
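The counter-clockwise winding rule determines each triangle's outward normal via a cross product. A minimal sketch, with vertices as numeric 3-tuples (names illustrative):

```python
def triangle_normal(v0, v1, v2):
    """Normal of a triangle whose vertices v0, v1, v2 are listed
    counter-clockwise as seen from outside the surface; the cross
    product (v1 - v0) x (v2 - v0) then points outwards."""
    ax, ay, az = (v1[i] - v0[i] for i in range(3))
    bx, by, bz = (v2[i] - v0[i] for i in range(3))
    return (ay * bz - az * by,
            az * bx - ax * bz,
            ax * by - ay * bx)
```

Reversing the vertex order flips the sign of the normal, which is why the winding convention matters.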
If there is any hole on the pupils, brows or nostrils, it may be filled. For enrolment, the 3D shape composed by all the triangles shall not have any hole inside the area of the front of the head. The usage of the Vertex textures block is mandatory.
The vertex normals block shall not be present. If necessary, a vertex normal can be recomputed from the normals of its neighbouring triangles.

D.3.6.4 Texture map

For biometric recognition purposes, either shape data without texture data, or a combination of shape data and texture data can be used. As a consequence, the texture map is optional.
The Texture projection matrix block shall not be used, as each vertex texture value is already defined by the UV coordinates, which refer to the texture map.
The acquisition period is defined by the absolute difference between the acquisition end and the acquisition start time. During this period neither the subject nor the acquisition system shall move or be moved.

D.3.6.5 Face area scanned

The Face area scanned element shall include the area of the head which is covered by the scanned data. It may have the following values: front of the face, ears, chin, neck, back of the face or full head. The presence of the front of the face is mandatory.

Annex E
(informative)

Additional technical considerations

E.1 Setup examples for face portrait capturing

E.1.1 General

The implementation of the face portrait acquisition setup should be done considering the properties of the different technologies and environments. Some face portrait acquisition setups are detailed below. This annex lists examples in no particular order and neither recommends the technologies mentioned here nor excludes technologies not mentioned here.

E.1.2 Studio environment with one single light

A single light and multiple reflector panels illuminate the subject's face uniformly. The light with a reflector should be placed approximately 35° above the line between the camera and the subject and be directed toward the subject's face at a horizontal angle of less than 45° from the line. A reflector panel should be placed on the subject's opposite side to prevent shadows on the face. Optionally, an additional reflector may be placed below and in front of the subject's face to illuminate the area around the chin. See Figure E.1.

Figure E.1 - Example of a setup with one single light

E.1.3 Studio environment with two lights

Two lights with reflectors should be placed approximately 35° above the line between the camera lens and the subject. Both lights should be placed within 45° of the line between the camera lens and the subject. The optional plane reflector in front of the subject supplies additional light around and below the subject's chin. See Figure E.2.

Figure E.2 - Example of a setup with two front lights

E.1.4 Studio environment with two lights and background illumination

A background light is added to the arrangement in E.1.3 to eliminate shadows visible on the background behind the head. The background light should be directed at the background and be placed directly behind and below the subject. See Figure E.3.

Figure E.3 - Example of a setup with two front lights and one background light

E.1.5 Photo booth environment

The requirements for a good face portrait also apply to a photo booth, and an operator should make their best effort to get as close as possible to the recommendations given in this document. In studio environments, a human operator can check the quality; in photo booths and kiosks, this capability should be replaced by automated quality assurance technology.
In a photo booth, multiple lights should be positioned symmetrically behind a diffuser panel above and aside of the camera to provide uniform lighting on the subject's face and to eliminate glare and shadows visible on the face. See Figures E.4 and E.7. A background light should be placed on the ground between the background and the subject. The front lights should be placed at an angle of approximately 35° above the line between the camera and the subject's head to prevent reflection artefacts on the subject's glasses. The inside walls should be white and serve as reflectors. Directly behind the subject there should be no directly reflecting material. The interior lights of the booth should be kept switched on during operation to reduce red eye effects. Direct or indirect lighting from below and in front of the subject should be used to eliminate shadows around the chin. An opaque curtain should be used and be closed during capturing to eliminate external light effects.
Proper positioning of the subject and control of the subject’s pose may be improved through feedback provided to the subject via a mirror or a live-video monitor. A height-adjustable seat or camera should be provided to allow the subject to face the camera. See Figure E.5. Alternatively, the camera may be movable to adjust the height to the head position. See Figure E.6.

Figure E.4 - Example of a photo booth setup: Front view

Figure E.5 - Example of a photo booth setup: Side view

Figure E.6 - Example of a photo booth setup: Side view with height adjustable camera

Figure E.7 - Example of a photo booth setup: Top view

E.1.6 Registration office environment

The requirements for a good face portrait also apply in a registration office environment, and an operator should try to get as close as possible to the recommendations given in this document. One should bear in mind that such a simplified setup often leads to face portraits of suboptimal quality and should therefore not be the preferred solution.
The subject and the background should be illuminated by two diffuse light sources mounted in a console with a small footprint, so that it fits into a typical registration office environment. The console may be mounted on the floor or on the wall. Flash should not be used on its own, at most in combination with appropriate permanent illumination. The main illumination during capturing should be that of the capturing system. Illumination provided mainly by ceiling lights, a window or a desk light is not acceptable, nor is direct sunlight. Even if the office conditions might require much simpler setups, the principal requirement of uniform illumination remains valid.
A revolving and height-adjustable chair or stool with an additional cushion for smaller capture subjects should be provided to allow the subject to face the camera and adjust his head to the proper height. See Figure E.8.

Figure E.8 - Example of a registration office environment setup
Feedback should be provided to the subject via a second live-video monitor facing the subject, for positioning and behaviour guidance. An image preview should be offered to allow a subject to choose from a selection of face portraits, or to be recaptured if necessary, before the final face portrait is submitted for further processing.
However, empirical data from production environments indicates that subjects who see the live view enter into a “vanity mode”. This can significantly reduce the throughput of the process and the quality of the captured biometric data. As an alternative to live view, visual, graphical or verbal instructions should be provided to the subject to reach optimal face and body posture.

E.1.7 Setup with flash

If carefully applied, flashes may be used. In this case, the quality requirements, especially with respect to shadows, homogeneous illumination, and the absence of reflections in the eyes, have to be maintained. See Figure E.9. The given distance measures are examples.

Figure E.9 - Example of a setup with flash, top and side view

E.2 Measuring magnification and radial distortion in a face portrait capture setup

E.2.1 General

This annex deals with two kinds of distortion.

The first kind is the magnification distortion, a geometrical effect of the optical perspective. Some optical systems image objects of the same size differently depending on the distance between object and sensor. Such magnification distortion always appears in human vision.
The second kind is the radial distortion (barrel distortion, pincushion distortion, moustache distortion) caused by optical properties of a lens. This effect does not occur in human vision.
Both magnification and radial distortion may influence the performance of automated face recognition systems as well as of human recognition. Therefore, this Annex describes how to measure various types of distortion using two targets.

E.2.2 Magnification distortion target construction

To build the magnification distortion target, take Figure E.10, print it on A4 size paper, and fold it into a T shape. One should look at the foot of the T; the head bar is on the opposite side, away from the viewer. The length of the T leg is the typical eye-to-nose distance of a Caucasian (50 mm). In a photograph of the magnification distortion target taken as described in E.2.4, the relative size of an object near the eye level and near the nose tip level of a human can be measured. Additionally, it can be observed whether the image is sharp enough across the entire face region, including nose and eyes, by checking the visibility of the 0,5 mm and 1 mm wide markers in the squares. See Figure E.11.
Figure E.10 - Magnification distortion target

Figure E.11 - Ready-to-use simple magnification distortion target
In Figure E.11, the millimetre markers at nose level, the markers at 20 mm distance at nose and eye level, as well as the resolution targets with one and two line pairs per millimetre are shown.
To precisely and repeatedly measure magnification distortion, fabrication of a rigid magnification distortion target is recommended.
To keep the magnification distortion target pointing at the camera it can be necessary to attach the folded target to a support. For target support material it is recommended to use white foamboard with a thickness of 5 mm or 2 mm aluminium. The target support structure construction is also suitable for 3D printing using plastic materials in light colour. Paper based materials should not be used.
This magnification distortion target should be used to measure magnification distortion in the nose region of the face. The magnification distortion measuring is described in E.2.5.

E.2.3 Radial distortion target construction

A radial distortion target is composed of evenly spaced, horizontal and vertical lines, forming a net-like pattern.
Start building the radial distortion target by cutting the back plane and supporting panels.

This can be done by stacking four foamboard pieces together to form a 50 mm × 160 mm × 20 mm support slab. See Figure E.12. If thinner aluminium or thicker foamboard is used, then change the dimensions accordingly. Paper based materials should not be used for the support. Foamboard or similar material is recommended for the box material. The size of the visible white board is 200 mm × 200 mm. To maintain the base material flatness requirements, use 2 mm aluminium or 5 mm foamboard.

ISO/IEC 39794-5:2019(E)

Glue the four foamboard pieces together. Use strong glue that does not melt the board material. Make sure that the slab has 90° corners. In order to keep the slab in correct shape, use supports while gluing the slab together piece by piece.

Figure E.12 - Magnification distortion target support parts and dimensions for 5 mm thick foamboard construction
The printed radial distortion target is glued on the back plane foamboard. Figure E.13 shows the print version of the radial distortion target for A4 size printing, which is glued to the back plane of the target support board.
Print the target on an A4 paper. Check the size of the grid before cutting. The grid size is 150 mm × 150 mm (the exact size is 151,1 mm × 151,1 mm due to the line width in use). Cut out the radial distortion target along the outermost border line so that the border line stays intact, or cut out the upper and lower part of the A4 to form a 200 mm × 200 mm size paper target. Glue the radial distortion target at the centre of the 210 mm × 210 mm foamboard.
Figure E.13 - Radial distortion target version for A4 size printing
The radial distortion target should be used to measure barrel or pincushion distortion. The radial distortion measuring is described in E.2.6.

E.2.4 Target photography

Place the target at the typical location of the subject's head in the given photographic setup, or glue it onto a target support board. Take a picture of the target in the intended setup, from the chosen distance, with the intended focal length, and with the intended aperture.
The target is placed at an appropriate distance from the lens following the guidelines set in this document. Before taking photographs, the camera and lights are set up following the recommendations of this document.
A tripod or similar support can be used to keep the target properly aligned. Touch (hook and loop) fasteners may be used to temporarily hold the target for photography. A small patch of fastener tape attached to the target support can hold lightweight targets as described in this document.

E.2.5 Magnification distortion measuring

If the distance between the foremost point of a subject and the optical centre of a standard lens (i.e. not telecentric) is $D$ and the height of a structure $S_1$ in front of the face, e.g. the nose, is $h_{s1}$, then the camera to subject distance (CSD) is assumed to be $\Delta D + D$, and a structure $S_2$ of the height $\Delta h_{s1} + h_{s1}$ at eye level would virtually appear to have the same size as $S_1$. $S_1$ virtually seems to be larger than it is in reality due to the magnification distortion. See Figure E.14.

Key

1 sensor plane
2 sensor
3 camera
4 nose plane with $S_1$ object
5 eye plane with $S_2$ object
6 optical axis
Figure E.14 - Illustration of the magnification distortion effect
For Figure E.14: $(\Delta h_{s1} + h_{s1})/h_{s1} = (\Delta D + D)/D$. This leads to the definition of the magnification distortion factor:
$K_{\text{magnification}} = \Delta D/(D + \Delta D) \times 100\,\% = \Delta h_{s1}/(h_{s1} + \Delta h_{s1}) \times 100\,\%$
where
$\Delta D$ is the depth of the measured object,
$D + \Delta D$ the camera-subject distance,
$h_{s1}$ the height of a structure $S_1$ in front of the face.
The magnification distortion factor $K_{\text{magnification}}$ is the relative enlargement of an object at nose level compared to an object at eye level. The value of $\Delta D$ is assumed to be 50 mm as the typical distance between the nose tip and the eye level of an adult Caucasian. Table E.1 shows the (computed) absolute enlargement of an object with the size $\Delta D = 50$ mm (like a nose) seen from several distances and the corresponding relative enlargements $K_{\text{magnification}}$.
Table E.1 - Illustration of the magnification distortion effect
| CSD in mm | $\Delta h_{s1}$ in mm at $h = 50$ mm | Magnification distortion $K_{\text{magnification}}$ |
| :--- | :--- | :--- |
| 700 | 3,57 | 7,14 % |
| 1000 | 2,50 | 5,00 % |
| 1200 | 2,08 | 4,17 % |
| 1500 | 1,67 | 3,33 % |
| 2500 | 1,00 | 2,00 % |
| 3000 | 0,83 | 1,67 % |
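The computed columns of Table E.1 follow directly from the formula for $K_{\text{magnification}}$ with $\Delta D = 50$ mm and CSD $= D + \Delta D$. A minimal sketch reproducing them (function and variable names are illustrative):

```python
DELTA_D_MM = 50.0  # assumed nose-tip-to-eye-level depth of an adult face

def k_magnification(csd_mm: float, delta_d_mm: float = DELTA_D_MM) -> float:
    """K_magnification = dD / (D + dD) * 100 %, with CSD = D + dD."""
    return delta_d_mm / csd_mm * 100.0

for csd in (700, 1000, 1200, 1500, 2500, 3000):
    k = k_magnification(csd)
    delta_h = DELTA_D_MM * k / 100.0  # absolute enlargement of a 50 mm structure
    print(f"CSD {csd:>4} mm: dh = {delta_h:.2f} mm, K = {k:.2f} %")
```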
Figure E.15 shows details of the photograph of the magnification distortion target taken in the intended photographic setup, from the chosen distance, with the intended focal length, and with the intended aperture. In order to determine the magnification distortion, measure a size $h_{s1} + \Delta h_{s1}$ at nose-tip level and the difference $\Delta h_{s1}$ to the corresponding size at eye level, and calculate
$K_{\text{magnification}} = \Delta h_{s1}/(h_{s1} + \Delta h_{s1}) \times 100\,\%$
Using the magnification distortion target from Figure E.10, the following experimental data have been collected. Figure E.15 shows details from the captured image, Table E.2 shows the measured and computed results. Note that the observations almost exactly match the computations from Table E.1.

Figure E.15 - Example measurement of the magnification distortion effect
Table E.2 - Experimental measures
| CSD in mm | Target size $h_{s1} + \Delta h_{s1}$ in mm (and pixels) | $h_{s1}$ in mm (and pixels) | $\Delta h_{s1}$ in mm | $K_{\text{magnification}} = \Delta h_{s1}/(h_{s1} + \Delta h_{s1}) \times 100\,\%$ |
| :--- | :--- | :--- | :--- | :--- |
| 750 | 80 (3216) | 74,7 (3004) | 5,3 | 6,6 % |
| 1050 | 80 (2631) | 76,3 (2508) | 3,7 | 4,6 % |
| 1250 | 80 (2160) | 76,8 (2073) | 3,2 | 4,0 % |
| 1550 | 80 (1742) | 77,4 (1686) | 2,6 | 3,2 % |
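The measured values of Table E.2 can be turned into $K_{\text{magnification}}$ by comparing the target size imaged at nose-tip level with the corresponding size at eye level. A sketch of that computation (the function name is illustrative; the sizes may be given in mm or in pixels, as in the table):

```python
def k_from_measured(size_nose: float, size_eye: float) -> float:
    """K = dh / (h + dh) * 100, where size_nose = h + dh (nose-tip level)
    and size_eye = h (eye level)."""
    return (size_nose - size_eye) / size_nose * 100.0

# first row of Table E.2, measured in pixels
print(f"K = {k_from_measured(3216, 3004):.1f} %")
```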

E.2.6 Radial distortion measuring

Radial distortion of the camera lens causes information about the object to be misplaced but not lost. Measuring lens-related radial distortion is important because some ABC and photo kiosk cameras do not use the highest quality lenses. For focal lengths shorter than about 30 mm, the distortion is dominantly barrel, the fisheye lens being the ultimate example. Cell phone cameras and fixed focus lenses have short focal lengths.
In practice measurements of the radial distortions are done either manually or using measurement software. If doing the measurements manually then it is important to zoom in the measured area and to use an image processing software ruler tool for measurements. In Figure E.16, the printed grid is marked above in light blue to show the recommended alignment and size when compared to a face portrait. Measurement vector starts from the grid corner point pixel location and ends at the respective corner point of the grid. The green circles are for reference purposes only to show how the recognition program may set certain landmarks on the actual face portrait. Guidance for the measurements is shown in Figure E.17.
在實際操作中,徑向畸變的測量可以手動進行或使用測量軟體完成。若採用手動測量,務必放大待測區域並使用影像處理軟體的標尺工具進行量測。圖 E.16 中,印刷格網以淺藍色標示於上方,顯示與人像照片對照時建議的對齊方式與尺寸。測量向量起始於格網角點像素位置,終止於格網對應角點。綠色圓圈僅供參考,用以說明辨識程式如何在實際人像照片上設定特定特徵點。測量指引詳見圖 E.17 所示。

Figure E. 16 - Measurement grid example
圖 E. 16 - 測量網格範例
Radial distortion is a type of geometrical aberration that causes a difference in magnification of the object at different points in the image. Various points are misplaced relative to the central point of the image. The barrel or pincushion distortion K radial K radial  K_("radial ")K_{\text {radial }} is calculated by:
徑向畸變是一種幾何像差,會導致影像中不同位置的物件放大率產生差異。各點相對於影像中心點的位置會發生偏移。桶形或枕形畸變 K radial K radial  K_("radial ")K_{\text {radial }} 的計算公式為:
$K_{\text{radial}} = (PD - AD)/PD \times 100$
where
$AD$ is the actual distance and
$PD$ is the photographic distance from the centre point of the image.

Figure E.17 gives an example of barrel distortion, with non-distorted and distorted frame intersection points shown with dots and arrows. Use guidelines to find the corner points of the undistorted image, shown here as a grid drawn through the marked points with black lines.

Figure E.17 - Barrel distortion explained by a non-distorted and a distorted frame
Distortion, represented by a percentage, may be either positive or negative. A positive percentage represents pincushion distortion, whereas a negative percentage represents barrel distortion.
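Under this sign convention, $K_{\text{radial}}$ and the resulting classification can be sketched as follows (function names are illustrative):

```python
def k_radial(pd: float, ad: float) -> float:
    """K_radial = (PD - AD) / PD * 100, where PD is the photographic
    (distorted) distance from the image centre and AD the actual
    (undistorted) distance."""
    return (pd - ad) / pd * 100.0

def classify(k: float) -> str:
    # positive percentage -> pincushion, negative -> barrel
    if k > 0:
        return "pincushion"
    if k < 0:
        return "barrel"
    return "none"

# corners pushed inward: PD < AD, so K is negative (barrel)
k = k_radial(100.0, 103.0)
print(f"K_radial = {k:.1f} % -> {classify(k)}")
```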
Figure E.18 illustrates barrel and pincushion distortion compared to an ideal grid picture in a perfectly square non-distorted image.

Key
1 barrel distortion
2 pincushion distortion
3 non-distorted target
Figure E.18 - Barrel and pincushion distortion
In practice it is difficult to define the exact location of the non-distorted target on the distorted photographic image. By drawing a cross through the middle point of the target image it is possible to find four intersecting points, which in turn may be used to locate the non-distorted target frame for error calculation purposes. This makes the measurements of different systems comparable, but the values are not absolute, as the real distortion zooming error behaviour of different lenses is not the same.

E.3 Colour test

E.3.1 Colour tests according to ISO/CIE 11664-4

The IEC 61966-8[11], ANSI IT8.7/2[12] or similar ISO 12641-1[13] test chart, or another compatible and well documented colour test chart containing skin colour patches (e.g., IEC 61966-8 or ANSI IT8.7/2, see Figures E.19 and E.20), should be used to measure colour accuracy and dynamic range (indirect method). Colour quality is determined by converting the image to CIE L*a*b*[15] space and measuring the distance between the colours captured and the known colour values of the chart patches.

Figure E.19 - IEC 61966-8 colour test chart
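A common way to express the distance between a captured patch colour and its known reference value in L*a*b* space is the CIE76 colour difference, i.e. the Euclidean distance between the two coordinate triples. A minimal sketch; the skin-tone patch values below are hypothetical examples, not values taken from the referenced charts:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in L*a*b* space."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# hypothetical skin-tone patch: known reference value vs. captured value
reference = (65.0, 18.0, 18.0)
captured = (63.5, 19.0, 16.5)
print(f"dE76 = {delta_e76(reference, captured):.2f}")
```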
Patterns with sufficiently large patches are needed to measure noise (e.g., ISO 15739, ISO 14524). See Figure E.21.

Figure E.20 - ANSI IT8.7/2 test chart

E.3.2 Image colour quality

Human examiners and face recognition systems rely on high-quality skin tone presentation. Colour is a subjective psychological phenomenon, and human perception of colour depends on the context in which a perceived object is presented (i.e., chromatic adaptation). Therefore, the colour test should measure the entire gamut, as an examiner needs the surrounding colours to perceive colours in portions of the face (e.g., lips, hair, eyes, makeup) correctly.
Cameras should be white balanced and scanners colour managed to ensure high fidelity colour reproduction across the entire gamut. This is necessary as the digital camera software is preprocessing the internal raw image format and may distort the image colour when JPEG is used as source format for image analysis.
攝影機應進行白平衡校正,掃描器則需進行色彩管理,以確保在整個色域範圍內都能實現高保真的色彩重現。此步驟至關重要,因為數位相機軟體會對內部原始影像格式進行預處理,當採用 JPEG 作為影像分析來源格式時,可能導致色彩失真。
In order to achieve good sample fidelity, there shall be no saturation (e.g., over- or under-exposure) on the measurement target. All RGB channels of the image should have at least 7 bits of intensity variation (i.e., span a range of at least 128 unique values) in the test target patch region of the image. This is required to get as near as possible to an L* level of 50, which in turn ensures a wide sRGB [11] gamut will be available for the analysis.
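This check can be automated on the captured test image. The sketch below is illustrative only (the function name and the pixel-list representation are not part of this document); it counts the distinct intensity levels per channel over the test-target region:

```python
def has_seven_bit_variation(region, min_unique=128):
    """region: iterable of (r, g, b) 8-bit pixels covering the test-target
    patch area. Returns True if every RGB channel exhibits at least
    `min_unique` distinct intensity levels (7 bits of variation)."""
    pixels = list(region)
    return all(len({px[ch] for px in pixels}) >= min_unique for ch in range(3))
```

A full grey ramp passes the check, while a flat (saturated or clipped) region fails it.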

Figure E.21 - Standard test patterns with sufficiently large patches to measure signal to noise

E.3.3 Measurements and analysis

In order to assure the required image quality, system installers shall make quality assurance measurements of light conditions and camera system response when a recommended CIE Standard Illuminant D65 high-quality illuminant (or a similar continuous-spectrum daylight illuminant) and a camera, including camera control software, are used to take pictures. In practice, it is also necessary to reduce the ambient light pollution emanating from uncontrolled daylight sources, fluorescent or similar light sources, and reflections from surfaces.
Face portrait images are typically stored in sRGB (IEC 61966-2-1 [16]), a device-independent colour space designed for consistent display across a wide range of commercial devices. However, equal distances in spaces like sRGB do not represent equally perceptible differences between colour stimuli. To address this, in 1976 the CIE created the L*a*b* colour space, whose coordinate system is based on nonlinear transformations that attempt to capture perceptual distance. When computing colour error, images are typically converted from the sRGB colour space to L*a*b* [15], a colour space engineered to approximate the way the human visual system perceives colour. Algorithmically, the transformation from sRGB to L*a*b* values is accomplished by taking advantage of colour-matching functions developed for the CIE 1931 standard colourimetric system.
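A minimal sketch of that conversion for a single 8-bit pixel, assuming the IEC 61966-2-1 sRGB transfer function and matrix and the D65 reference white (this is not code from this document):

```python
def srgb_to_lab(r, g, b):
    """Convert one 8-bit sRGB pixel to CIE L*a*b* (D65 reference white)."""
    # 1. Undo the sRGB transfer function (gamma) to get linear RGB.
    def linearize(c8):
        c = c8 / 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = (linearize(c) for c in (r, g, b))

    # 2. Linear RGB -> CIE XYZ using the sRGB (IEC 61966-2-1) matrix.
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    # 3. XYZ -> L*a*b* relative to the D65 white point.
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1.0 / 3.0) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

For reference white (255, 255, 255) this yields L* = 100 with a* and b* near 0, and for mid grey (118, 118, 118) an L* close to 50, the level targeted in E.3.2.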
Fixed registration office imaging systems should be calibrated using the manual or automated methods described in this document. For photo booth and mobile registration office imaging, automatic white balance setting procedures and automatic quality analysis should be used. In a mobile environment, the use of advanced manual measurements may fail due to time and user training constraints. However, face portraits shall not be captured without adequate colour balance.
Variations in human skin colour have been measured using visible reflectance spectroscopy and the device-independent colour space CIELAB [25][26]. Skin colour values can be expressed along the three dimensions of the CIELAB colour space: lightness, scaled from 0 (black) to 100 (white) along L*, and the opponent colour axes a* and b*, where a* runs from red (positive) to green (negative) and b* similarly from yellow to blue.
NOTE 1 When the face image is expressed in sRGB colour space, the gamut shrinks and moves upwards (in the positive a*b* direction), and therefore higher L* values may produce higher a* and b* values than shown in research papers.
In a study [25] that used the above technique to measure the variation in the skin colour on the cheeks and foreheads of 960 Caucasian, Chinese, Kurdish, and Thai individuals, the means and standard deviations found were as follows: L* mean 58,21, sigma 4,23; a* mean 11,45, sigma 2,38; and b* mean 15,91, sigma 2. However, as the study population was not representative of the variation in skin across all ethnicities, wider variations in skin colour should be expected. As studies [25][26] have shown that values lower than 5 for a* and 10 for b* have not been measured for face skin colours, these limits can be used to provide a warning of possible problems in captured skin colour.
NOTE 2 These studies did not take into account the possible impact on skin colour due to dermatological conditions.
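These limits lend themselves to a trivial automated warning. The sketch below is illustrative only; the thresholds a* < 5 and b* < 10 come from the cited studies [25][26], and the function name is hypothetical:

```python
def skin_colour_warning(a_star, b_star):
    """True if a measured face-skin colour falls outside the range reported
    in studies [25][26] (a* below 5 or b* below 10), indicating a possible
    problem in the captured skin colour."""
    return a_star < 5.0 or b_star < 10.0
```

The reported mean skin colour (a* 11,45, b* 15,91) passes the check, while values below either threshold trigger the warning.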

E.3.4 Coordinate calculation

Figure E.22 - CIE chroma (ab) luminance (L*) levels 50 and 75 show the sRGB [11] gamut compared to the entire a*b* area
Human skin tones should be located in the upper right-hand sector of Figure E.22.

In CIE L*a*b*, the non-linear relations for L*, a* and b* are intended to mimic the logarithmic response of the human eye [14]. Colour quality is determined by converting the image to CIE L*a*b* [15] space and measuring the distance between the colours captured and the known colour values of the chart patches. CIE Delta E 2000 [23] is a standard method for measuring this distance. A capture system's performance can be improved by minimizing the system's average (i.e., measured across all chart patches) and maximum (i.e., measured for any given chart patch) Delta E 2000.
An ideal system would have an average Delta E 2000 of 1 and a maximum Delta E 2000 of 5.
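CIEDE2000 itself involves many correction terms; as a first sanity check against the targets above, the simpler CIE 1976 ΔE*ab (straight Euclidean distance in L*a*b*) can be computed and aggregated the same way. The patch values below are hypothetical:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE 1976 delta E*ab: Euclidean distance in L*a*b*. A simpler
    stand-in for the CIEDE2000 metric this document specifies."""
    return math.dist(lab1, lab2)

measured  = [(50.2, 11.1, 16.3), (75.8, 5.2, 9.9)]   # hypothetical patch readings
reference = [(50.0, 11.5, 15.9), (76.0, 5.0, 10.0)]  # known chart values
errors = [delta_e_ab(m, r) for m, r in zip(measured, reference)]
avg_err, max_err = sum(errors) / len(errors), max(errors)
# Per E.3.4: aim for an average near 1 and a maximum no greater than 5.
```

For the hypothetical patches above, the average error is 0,45 and the maximum 0,6, both well within the stated targets.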

E.4 MTF test method according to ISO 12233:2014

Spatial resolution is a measure of the smallest discernible detail in an image. The image resolution measurements described herein are designed for the calibration of photo studios and office imaging systems. The methods generally involve photographing or scanning a standard target and analysing the resulting images on a computer using standardized algorithms to compute a value.
Image fidelity factors are affected by the imaging sensor and lens. Resolution is a single frequency parameter that indicates whether the output signal contains a minimum threshold of detail information for visual detection (i.e., the highest spatial frequency that a camera or other similar imaging device can usefully capture).

Figure E.23 - Relation between the physical size of an object at the face and its counterpart at the camera sensor
Spatial frequency response is a multi-valued metric that measures contrast loss as a function of spatial frequency. Generally, contrast decreases as a function of spatial frequency to a level where detail can no longer be visually resolved. See Figure E.23. This limiting frequency value is the resolution of the camera, which is determined by the performance of the camera lens, the number of addressable photo elements in the optical imaging device, and the electrical circuits in the camera, which optionally perform image compression and gamma correction.

Figure E.24 - Sine wave test chart overlaid with a white contrast mask to show decreasing contrast effects from a 100 % value at the top to a 0 % value at the bottom
Modulation transfer functions (MTFs) are normally measured by optics experts using purpose-printed sine wave test charts, such as in Figure E.24, and standardized procedures. Alternatively, the MTF can be determined from the magnitude of the Fourier transform of a system's point or line spread function. The Fourier transform decomposes the spread function into the frequencies that comprise it. By looking at the amplitudes of the frequencies, it is possible to define the resolution characteristics of the measured imaging system.

Figure E.25 - Test pattern: ISO 12233:2014
ISO 12233:2014 specifies methods for measuring the spatial frequency response (SFR) of electronic still picture cameras and similar imaging devices such as video cameras and flatbed scanners. The SFR measurement closely approximates the mathematically-defined system MTF of the camera. The MTF of the camera can only be approximated through the SFR, because most electronic still-picture cameras provide spatial colour sampling and nonlinear processing.
SFR is measured by capturing an image of a bi-tonal, rotated square test pattern, such as those shown in Figure E.25 or E.26, which can subsequently be analysed using readily-available edge-analysis image processing software. It is important that the test pattern is captured in the same photographic environment that is prescribed for the face by this document, including camera-to-subject distance, image dimensions, etc.
To perform slanted edge measurements using a software program, the user selects a region containing the edge to be measured. The software processes digital image values near slanted vertical and horizontal black-to-white edges to derive super-sampled edge spread data, which is then filtered and converted to the frequency domain to get the SFR values. A horizontal edge is used for vertical SFR measurement.
Slanted edge measurements are less sensitive to noise than sine patterns. Gamma influences the MTF measurement accuracy, and for this reason, the gamma value should be measured using a grey scale chart. Gamma correction, or often simply gamma, is the name of a nonlinear operation used to encode and decode luminance or colour-related tristimulus values in imaging systems. An incorrect gamma setting for the MTF calculation causes an error situation where the MTF at 2 × Nyquist is not equal to 0, as it should be. In practice, a measurement of exactly 0 is not required to achieve an acceptable measurement.

Figure E.26 - Test pattern: ISO 16067-1

Note that the physical distance between the fiducials on the standardized test pattern is 66,8 mm.

An example SFR measured for an imaging system is shown in Figure E.27. The contrast value is depicted either in percent or on a scale from 0 to 1, where 1 corresponds to 100 % on the vertical axis, and frequency values start from 0 and increase to the right on the horizontal axis. The frequency unit of an SFR measurement can be represented in cy/px or cy/mm. For the purposes of this document, where the system specification references the object being photographed (i.e., the size of features on a face), the frequency unit should be in cycles/mm in the object plane. The size of a freckle/mole that should be detectable in face photos is 2 mm to 3 mm. The unit cy/mm is preferred over cy/px, as sensor dimensions, camera distances, and head sizes vary. The physical distance of the object is the universal unit for all issuers. This measurement methodology requires that the user compute the sampling rate in px/mm, which is determined by measuring the number of pixels across the known physical dimensions of the test pattern.

Key
1 Nyquist

Figure E.27 - Example SFR graph
The Nyquist frequency is half of the sampling rate of a discrete (as opposed to continuous) signal processing system. The Nyquist frequency is also called the Nyquist limit. It is the highest frequency that can be encoded at a given sampling rate while still fully reconstructing the image. The Nyquist frequency for an imaging system is 0,5 cy/px, as two samples (i.e., pixels) is the minimum required to represent a complete cycle. However, cy/px does not clarify the size of the object being resolved and must therefore be converted to cy/mm.
The SFR is deemed acceptable if the MTF remains sufficiently high up to a specified frequency. The Nyquist limit for the specification is determined by the minimum sampling rate specified in this document multiplied by the Nyquist limit in cy/px. A different specification is needed for scanned photographs versus live capture, as the object plane in each case is different. The minimum sampling rates in this document are: 90 pixels per inter-eye distance (approximately 60 mm) for cameras, and 300 PPI for scanners. The Nyquist limits are therefore: 0,75 cy/mm for cameras and 5,9 cy/mm for scanners.
MTF20, which is an indicator of SFR at higher spatial frequencies, should occur at approximately 80 % of the Nyquist frequency, or 0,6 cy/mm for cameras and 4,7 cy/mm for scanners.
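These limits follow directly from the stated sampling rates; a short worked computation (assuming 25,4 mm per inch for the scanner case):

```python
def nyquist_cy_per_mm(sampling_px_per_mm):
    # Nyquist limit of 0.5 cy/px, converted to the object plane.
    return 0.5 * sampling_px_per_mm

# Camera: 90 px per inter-eye distance of approximately 60 mm -> 1.5 px/mm.
camera_nyquist = nyquist_cy_per_mm(90 / 60)       # 0,75 cy/mm
# Scanner: 300 PPI -> 300 / 25.4 px/mm.
scanner_nyquist = nyquist_cy_per_mm(300 / 25.4)   # about 5,9 cy/mm
# MTF20 target at approximately 80 % of the Nyquist limit.
camera_mtf20 = 0.8 * camera_nyquist               # 0,6 cy/mm
scanner_mtf20 = 0.8 * scanner_nyquist             # about 4,7 cy/mm
```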

E.5 Focus and depth-of-field considerations

Proper focus and depth-of-field will be assured by pre-focusing the lens at the distance of the subject's eyes and by selecting an appropriate aperture (F-stop) to ensure a depth-of-field containing the subject's nose and ears. The depth-of-field of a lens is dependent upon its focal length, its effective aperture, and the focus distance. Point sources which are closer or farther than the distance at which a lens is well focused will be blurred, with the extent of the blur described by a "circle of confusion". If the maximum diameter of the circle of confusion is limited by, for example, the spacing between adjacent pixels in a CCD image sensor, the front and rear distances from the plane of optimum focus that produce acceptably focused images can be determined. The sum of these front and rear distances is the depth-of-field.
D_DoF = D_front + D_rear

D_front = c·F·s·(s - f) / (f^2 + c·F·(s - f))

D_rear = c·F·s·(s - f) / (f^2 - c·F·(s - f))
where

D_DoF is the depth of field,

D_front is the front focal distance, the distance from the plane of focus to the plane closest to the lens that is still in acceptable focus,

D_rear is the rear focal distance, the distance from the plane of focus to the plane farthest from the lens that is still in acceptable focus,

c is the diameter of the circle of confusion,

s is the distance from the lens to the object, and

F = f/a is the F-stop, the lens focal length f divided by the effective lens aperture a.
Figure E.28 illustrates these dimensions.

Figure E.28 - Dimensions for depth-of-field calculations
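The formulas above can be evaluated directly. The sketch below uses illustrative values (a 50 mm lens at f/8, subject at 1,5 m, 0,03 mm circle of confusion) that are not taken from this document:

```python
def depth_of_field(f_mm, f_stop, s_mm, c_mm):
    """Depth of field per the formulas in E.5.
    f_mm: focal length f, f_stop: F = f/a, s_mm: lens-to-subject distance s,
    c_mm: circle-of-confusion diameter c. All lengths in mm."""
    num = c_mm * f_stop * s_mm * (s_mm - f_mm)
    d_front = num / (f_mm ** 2 + c_mm * f_stop * (s_mm - f_mm))
    d_rear = num / (f_mm ** 2 - c_mm * f_stop * (s_mm - f_mm))
    return d_front, d_rear, d_front + d_rear

# Illustrative values: 50 mm lens, f/8, subject at 1 500 mm, c = 0,03 mm.
front, rear, total = depth_of_field(50.0, 8.0, 1500.0, 0.03)
```

As the formulas predict, the rear focal distance exceeds the front focal distance, so slightly more of the acceptable focus zone lies behind the eye plane than in front of it.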

E.6 Report on the study of the effect of the camera-subject distance of reference face images on face verification performance

E.6.1 What is magnification distortion?

Taking photographs from a short camera-subject distance causes magnification distortion of face images. Figure E.14 illustrates the magnification distortion. Let the distance between the tip of the nose of a capture subject and the lens of a camera be D, let the distance between the eye plane of the capture subject and the lens of the camera (i.e. the camera-subject distance) be D + ΔD, and let the height of a structure at nose level be h. Then the structure at nose level appears to have the same size as a structure of height h + Δh at eye level. The magnification distortion is defined as:
K_magnification = ΔD / (D + ΔD) × 100 %
If the distance between the nose plane and the eye plane ΔD is 50 mm, the magnification distortions are as in Table E.3.
Table E.3 - Magnification distortion as a function of camera-subject distance
Camera-subject distance    Magnification distortion ΔD/(D + ΔD)
0,5 m                      10,0 %
0,7 m                      7,1 %
1,0 m                      5,0 %
1,2 m                      4,2 %
1,5 m                      3,3 %
2,5 m                      2,0 %
3,0 m                      1,7 %
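The table values follow directly from the definition of K_magnification with ΔD = 50 mm; a short check (the function name is illustrative):

```python
def magnification_distortion(camera_subject_m, delta_d_m=0.05):
    """K = ΔD / (D + ΔD) × 100 %, where the camera-subject distance is
    D + ΔD (eye plane to lens) and ΔD is the nose-to-eye-plane depth."""
    return delta_d_m / camera_subject_m * 100.0

# Reproduces Table E.3 (ΔD = 50 mm):
table = {d: round(magnification_distortion(d), 1)
         for d in (0.5, 0.7, 1.0, 1.2, 1.5, 2.5, 3.0)}
# table[0.5] == 10.0, table[3.0] == 1.7
```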

E.6.2 Methodology from enrolment to score calculation

A bench has been created capable of rapidly capturing pictures of the same subject under strictly controlled conditions. That bench (see Figure E.29) ensured that for all face images the capture conditions were the same and in line with the provisions specified in this document, except for the camera-subject distance, which is variable and ranges from 0,5 m to 3 m. The ten distances are 0,5 m, 0,6 m, 0,7 m, 0,8 m, 0,9 m, 1,0 m, 1,5 m, 2,0 m, 2,5 m and 3,0 m.
The bench contains a Canon EOS 6D digital camera:
  • 36 mm × 24 mm CMOS photo sensor with around 20,2 megapixels (5472 × 3648),
  • manual focus by SDK,
  • ISO speed set to 100,
  • aperture = 22,
  • flash synchronization: 1/160,
  • white balance set to flash.
The lens used is a Canon EF 50 mm f/1,8 STM with radial distortion of less than 0,8 %. The bench uses two front flashes PROFILITE 250 and one background flash PROFILITE 250 (each 250 W). The two front flashes are set to 5,0. The background flash is set to 1,0. The ambient light of the flashes is set to 50 %.
After seating a test subject in front of the camera of the bench, a capture session is started. The camera automatically captures 40 face images in two passes. In each pass, the camera moves to ten positions with different camera-subject distances from 0,5 m to 3 m. In the first pass, the camera moves away from the test subject. In the second pass, the camera moves back towards the test subject. At each stop, two images are taken in order to mitigate the risk of a closed-eye effect due to flash. After capturing 40 images, the capture session is completed.
Time between acquisitions is 12 s; the total duration of a capture session is 4 min. The precision of the movement is below 2 mm (0,08 % of the full movement). Given the difficulty of locating the optical centre of the camera lens and given the morphological and behavioural differences of the test subjects, the actual camera-subject distances may be up to 30 mm smaller than the recorded camera-subject distances.

Figure E.29 - Bench created to rapidly capture a number of pictures at different distances
Using the bench, local databases of face images with different camera subject distances were collected at the premises of members of the study group from as many volunteer test subjects as possible.
Local databases were collected at the premises of (in temporal order):
  • KIS SAS in France,
  • Oberthur Technologies in France,
  • Photo-Me International in the UK,
  • Gemalto in the Czech Republic,
  • Fotofix Schnellphotoautomaten in Germany, and
  • Nippon Auto-Photo in Japan.
Each test subject participated in only one capture session. The captured face images were cropped and resized in conformity with this document (i.e. “ICAO cropped”). The format of the ICAO cropped images was the JPEG file interchange format.
The local databases were encrypted and then sent to the Biometrics Evaluation Laboratory at the Fraunhofer Institute for Computer Graphics Research IGD, in order to be processed using various state-of-the-art face recognition algorithms.
The local databases were merged into one consolidated database containing 20 ICAO cropped face images from 435 test subjects, i.e. in total 8700 images. The filenames of all face images were pseudonymised such that it was not apparent from the filenames which face images matched and from which camera-subject distance they were captured. The consolidated face image database was divided into a directory of reference face images and a directory of probe face images. For each test subject, the directory of reference images contained ten ICAO cropped face images from the first pass (one per camera-subject distance), and the directory of probe images contained ten ICAO cropped face images from the second pass (one per camera-subject distance). The consolidated face image database was sequestered for the official run of face comparisons.

ISO/IEC 39794-5:2019(E)

Providers of commercial off-the-shelf, state-of-the-art algorithms for one-to-one face comparison were invited to participate in this study. The following algorithm providers (in alphabetic order) submitted face comparison software:
  • Dermalog,
  • id3 Technologies,
  • Innovatrics,
  • NEC, and
  • OT-Morpho.
The executable face comparison software was submitted by the algorithm providers to Fraunhofer IGD to be executed there on the sequestered face image database.
Each participating face comparison algorithm has been interfaced with two software interfaces. The participating algorithm providers were asked to build two executables in the form of Windows console applications:
  • extract.exe, to extract comparable features from each of a list of face images, and
    extract.exe,用於從每張人臉影像清單中提取可比較的特徵,以及
  • compare.exe, to compare features and produce a comparison score for each pair of faces given in a comparison list.
Each participating face comparison algorithm compared the features from each reference face image with the features from each probe face image. This means 4350 × 4350 = 18 922 500 comparisons. For each comparison, the file name of the reference image, the file name of the probe image, and the comparison score were recorded in a CSV file.
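The full cross-comparison described above can be sketched as follows. The file names, the constant score function and the output path are illustrative placeholders only; in the study the scores came from each provider's compare.exe, not from Python code:

```python
import csv
import itertools

def run_all_comparisons(reference_files, probe_files, score_fn, out_path):
    """Compare every reference with every probe and record one CSV row per
    pair: reference file name, probe file name, comparison score."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        for ref, probe in itertools.product(reference_files, probe_files):
            writer.writerow([ref, probe, score_fn(ref, probe)])

# Hypothetical miniature run (the study used 4350 references and 4350 probes,
# producing 18 922 500 rows); the lambda is a stand-in score function.
refs = [f"ref_{i:04d}.feat" for i in range(3)]
probes = [f"probe_{i:04d}.feat" for i in range(3)]
run_all_comparisons(refs, probes, lambda r, p: 0.5, "scores.csv")
```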

E.6.3 Data analysis

E.6.3.1 Methodology

The research hypothesis is that:
  • the camera-subject distance of a face image, or
  • different magnification distortions of a face image and of probe face images compared with that face image

have an effect on the usefulness of that face image as a reference image. The usefulness of a biometric sample for telling mated and non-mated samples apart is referred to as “utility”[27]. If, for several of the participating state-of-the-art face comparison algorithms, the camera-subject distance of reference images has negligible effect on their utility, the research hypothesis would be refuted.

E.6.3.2 False non-match rate at fixed false match rate 

For four out of five of the participating algorithms, the highest non-mated similarity score is lower than the lowest mated similarity score, i.e. the distribution of mated scores was clearly separated from that of non-mated scores, allowing perfect classification by setting the decision threshold between the two distributions of scores. No matter what value of FMR > 0 % is allowed, no false non-match error was observed in 43500 mated comparisons. Thus, according to the “Rule of 3”[28], with 95 % confidence FNMR ≤ 0,0069 % for most of the participating algorithms.
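The quoted bound can be reproduced directly: after N error-free trials, the Rule of 3 gives roughly 3/N as the 95 % upper confidence bound on the error rate. A minimal sketch (the function name is illustrative):

```python
def rule_of_three_upper_bound(n_trials: int) -> float:
    """95 % upper confidence bound on an error rate after n_trials
    independent trials in which zero errors were observed (Rule of 3)."""
    return 3.0 / n_trials

# 43500 mated comparisons with no false non-match observed:
print(f"FNMR <= {rule_of_three_upper_bound(43500):.4%}")  # FNMR <= 0.0069%
```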
Distance-related differences in score distributions can be significant in real scenarios where mated similarity scores are lower because of other factors affecting recognition performance (such as aging, pose variation and illumination). 
A measure of how well the distributions of mated and non-mated comparison scores are separated is d′ (pronounced “d-prime”), defined as d′ = |μ_m − μ_n| / √(σ_m² + σ_n²), where
  • μ_m is the arithmetic mean of the mated comparison scores,
  • μ_n is the arithmetic mean of the non-mated comparison scores,
  • σ_m is the standard deviation of the mated comparison scores, and
  • σ_n is the standard deviation of the non-mated comparison scores[29].
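The definition above translates directly into code; the two toy score lists are illustrative values, not data from the study:

```python
import math
import statistics

def d_prime(mated_scores, non_mated_scores):
    """d' = |mu_m - mu_n| / sqrt(sigma_m^2 + sigma_n^2)."""
    mu_m = statistics.fmean(mated_scores)
    mu_n = statistics.fmean(non_mated_scores)
    var_m = statistics.pvariance(mated_scores)    # population variance sigma_m^2
    var_n = statistics.pvariance(non_mated_scores)
    return abs(mu_m - mu_n) / math.sqrt(var_m + var_n)

print(d_prime([9.0, 11.0], [-1.0, 1.0]))  # 10 / sqrt(2), about 7.07
```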
Figure E.30 shows the average d′ values over the three best participating commercial face comparison algorithms as a function of the camera-subject distance of the reference image and the camera-subject distance of the probe image. The individual values are represented as colours. The lowest value is mapped to dark blue and the highest value to dark red.

Figure E.30 - Average d′ values over three commercial face comparison algorithms

E.6.4 Theoretical predictions

If a reference face image is compared with N face images, an algorithm returns N scores s_1 to s_N. We sort these N values in decreasing order to get s′_1 to s′_N (s′_1 is the biggest score). A d′ value over 12 means that the probability that s′_1 does not correspond to the right individual is below 2,15 · 10⁻³² (see the demonstration below).
If N is the world population of 7 · 10⁹ people, we estimate that the probability of one individual being wrongly classified due to magnification distortion extrapolates to 7 · 10⁹ · 2,15 · 10⁻³² = 1,51 · 10⁻²². This estimate assumes that the comparison scores are normally distributed, that the mated pairs are captured in a single sitting, and that the comparison scores depend only on the probe and reference images (no per-search normalization). If our assumptions are valid, false rejects will hardly ever happen.
Be aware that the performance of a face recognition system also depends on factors other than distance, e.g. illumination, pose, exposure and ageing, which can impact the mated score distribution.
Demonstration: Let U_n be a random variable representing a non-mated comparison score. Its mean is μ_n and its standard deviation is σ_n. Assume that the probability distribution of U_n is a normal distribution f_n. Let U_m be a random variable representing a mated comparison score. Its mean is μ_m and its standard deviation is σ_m. Assume that the probability distribution of U_m is a normal distribution f_m. Furthermore, assume that U_m and U_n are independent.
Then, X = U_m − U_n is also a random variable with a normal distribution f_X. Its mean is μ_X and its standard deviation is σ_X. The mean and the standard deviation of X can be determined as:
μ_X = μ_m − μ_n
σ_X = √(σ_m² + σ_n²)
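These two identities can be checked numerically by Monte Carlo sampling. The mated and non-mated parameters below are arbitrary illustrative values, not figures from the study:

```python
import random
import statistics

rng = random.Random(42)
mu_m, sigma_m = 80.0, 5.0    # illustrative mated-score mean and std
mu_n, sigma_n = 20.0, 12.0   # illustrative non-mated-score mean and std

# Sample X = U_m - U_n with U_m and U_n independent and normally distributed.
diffs = [rng.gauss(mu_m, sigma_m) - rng.gauss(mu_n, sigma_n)
         for _ in range(200_000)]

print(statistics.fmean(diffs))   # close to mu_m - mu_n = 60
print(statistics.pstdev(diffs))  # close to sqrt(5**2 + 12**2) = 13
```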
Figure E.31 shows examples of mated and non-mated score distributions and the distribution of differences of mated and non-mated scores.
In this study, d′ ∈ [12,7; 14,8]. So, d′ = |μ_m − μ_n| / √(σ_m² + σ_n²) = |μ_X| / σ_X > 12, and hence |μ_X| > 12 · σ_X.

Figure E.31 - Distribution of differences of mated and non-mated scores
Let X be normally distributed with mean |μ_X| = 12 · σ_X and standard deviation σ_X.
Let X′ = (X − 12 · σ_X) / σ_X. Then, X′ is normally distributed with mean 0 and standard deviation 1, and P(X′ ≤ −12) = 2,15 · 10⁻³².
P(X′ ≤ −12) = P((X − 12 · σ_X) / σ_X ≤ −12) = P(X − 12 · σ_X ≤ −12 · σ_X) = P(X ≤ 0)
So, P(X ≤ 0) = 2,15 · 10⁻³².

E.6.5 Conclusions

Based on the data collected for this study, camera-subject distance does not have a great influence on face verification performance over the range of camera-subject distances investigated. Over a database of excellent-quality face images of 435 test subjects, captured at a single capture session per test subject from different camera-subject distances, several face verification algorithms avoided verification errors altogether.
It is noticeable that the average d′ value is not sensitive at all to a reference distance above 0,7 m, i.e., below 7,1 % of magnification distortion. Nearly symmetrically, the average d′ value is not sensitive at all to a probe distance above 0,7 m, i.e., below 7,1 % magnification distortion. Even at 0,5 m, i.e., at 10 % magnification distortion, d′ decreases only 15 % to 12,7 versus the maximum value of 14,8 obtained with magnification distortion below 7,1 %.
The enrolment and verification images should be captured from a similar distance whenever possible.

E.7 Example of exposure metering at various spots on a subject

The exposure value (EV) is the value given to any combination of shutter speed and aperture (F-stop) that results in the same exposure. By definition, an EV value of 0 corresponds to a shutter speed of 1 second and an aperture of F1,0, for a film speed or equivalent image sensor sensitivity of ISO 100. EV is defined by:
EV = log₂(F² / T) = 2 log₂(F) − log₂(T)
where
F is the F-stop setting;
T is the exposure time.

A change of 1 EV corresponds to a one F-stop aperture increase or decrease or a halving or doubling of the exposure time.
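The definition above translates directly into code; the function name is an illustrative choice:

```python
import math

def exposure_value(f_stop: float, exposure_time_s: float) -> float:
    """EV = log2(F^2 / T) = 2*log2(F) - log2(T), at ISO 100 sensitivity."""
    return 2 * math.log2(f_stop) - math.log2(exposure_time_s)

print(exposure_value(1.0, 1.0))  # 0.0 by definition (F1,0 at 1 s)
# Halving the exposure time raises EV by one step:
print(exposure_value(8.0, 1 / 250) - exposure_value(8.0, 1 / 125))  # about 1.0
```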

Bibliography

[1] AAMVA DL/Identifier-2000, American Association of Motor Vehicle Administrators National Standard for the Driver License/Identification Card

[2] Anthropometry of the Head and Face, second edition, Leslie G. Farkas, Raven Press, New York, 1994

[3] C-Cube Microsystems, JPEG File Interchange Format (JFIF), Version 1.02

[4] International Color Consortium, Available at www.color.org

[5] NIST Best Practice Recommendation For The Capture Of Mugshots, Version 2.0, 1997

[6] PIMA 7667:2001, Photography - Electronic Still Picture Imaging - Extended sRGB Color Encoding - e-sRGB

[7] The Methods of Plane Projective Geometry Based on the Use of General Homogenous Coordinates, E.A. Maxwell, Cambridge University Press, 1960

[8] Verständigung über ein gemeinsames craniometrisches Verfahren (Frankfurter Verständigung). Ranke, J. (ed.). (1884). Archive Anthropologie, 15, 1-8

[9] ISO 20473:2007, Optics and photonics — Spectral bands

[10] ISO/IEC 8825-1:2015, Information technology - ASN.1 encoding rules: Specification of Basic Encoding Rules (BER), Canonical Encoding Rules (CER) and Distinguished Encoding Rules (DER) - Part 1

[11] IEC 61966-8:2001, Multimedia systems and equipment - Colour measurement and management - Part 8: Multimedia colour scanners

[12] ANSI IT8.7/2-1993 (R2013) Graphic technology - Color reflection target for input scanner calibration

[13] ISO 12641-1, Graphic technology - Prepress digital data exchange - Colour targets for input scanner calibration - Part 1: Colour targets for input scanner calibration

[14] ISO/CIE 11664-3, Colorimetry - Part 3: CIE tristimulus values

[15] ISO/IEC 11664-4, Colorimetry - Part 4: CIE 1976 L*a*b* colour space

[16] IEC 61966-2-1:1999, Multimedia systems and equipment - Colour measurement and management - Part 2-1: Colour management - Default RGB colour space - sRGB

[17] Rochester Institute of Technology https://www.cs.rit.edu/~ncs/color/t_convert.html#RGB

[18] Report Field Application, Materials Characterization: UV/Vis/NIR Spectroscopy, Jillian F. Dlugos, Jeffrey L. Taylor, Ph.D., PerkinElmer, Inc., 2012

[19] Digital imaging & image processing techniques for the comparison of human hair features, Carolyn Jane McLaren, University of Canberra, Faculty of Applied Science, 2012

[20] Grading of Iris Color with an Extended Photographic Reference Set, Luuk Franssen, Joris E. Coppens, Thomas J.T.P. van den Berg, Netherlands Institute for Neuroscience, an institute of the Royal Netherlands Academy of Arts and Sciences, Amsterdam, The Netherlands, 2008

[21] IEC 62676-4, Video surveillance systems for use in security applications - Part 4: Application guidelines 
[22] Spectral Filter Optimization for the Recovery of Parameters Which Describe Human Skin, Stephen J. Preece and Ela Claridge, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 26, No. 7, July 2004 
[23] CIE Delta E, 2000. Improvement to industrial color-difference evaluation. Vienna: CIE Publication No. 142-2001, Central Bureau of the CIE; 2001 
[24] ISO 16067-1:2003, Photography - Spatial resolution measurements of electronic scanners for photographic images - Part 1: Scanners for reflective media 
[25] Characterising the variations in ethnic skin colours: a new calibrated data base for human skin. K. Xiao, J. M. Yates, F. Zardawi, S. Sueeprasan, N. Liao, L. Gill, C.Li and S. Wuerger. Skin Research and Technology 2017; 23: 21-29. Published by John Wiley & Sons Ltd, 2016 
[26] Measuring H.S.C., Mengmeng Wang, Kaida Xiao, Sophie Wuerger, Vien Cheung, Ming Ronnier Luo. 23rd Color and Imaging Conference Final Program and Proceedings, Society for Imaging Science and Technology, 2015 
[27] ISO/IEC 29794-1, Information technology - Biometric sample quality - Part 1: Framework 
[28] ISO/IEC 19795-1, Information technology - Biometric performance testing and reporting - Part 1: Principles and framework
[29] Bolle R.M., Pankanti S., Ratha N.K., Evaluation techniques for biometrics-based authentication systems (FRR). In Proceedings of the 15th International Conference on Pattern Recognition ICPR, volume 2, 2000 
[30] Technical Guideline TR-03121-3: Biometrics for public sector applications, Part 3: Application Profiles and Function Modules, Volume 1: Verification scenarios for ePassport and Identity Card, Version 3.0.1. 2013 
[31] ISO/IEC 19785 (all parts), Information technology - Common Biometric Exchange Formats Framework 
[32] ISO/IEC 14496-1, Information technology - Coding of audio-visual objects - Part 1: Systems 
[33] Netpbm image format, http://netpbm.sourceforge.net/doc/pgm.html 
[34] Netpbm color image format, http://netpbm.sourceforge.net/doc/ppm.html 
[35] W3C XML sources: http://www.W3C.org/XML/SCHEMA.html#Tools 
[36] ITU-T ASN.1 sources: https://www.itu.int/en/ITU-T/asn1/Pages/Tools.aspx
[37] ISO 15739:2017, Photography - Electronic still-picture imaging - Noise measurements 
[38] ICAO Technical Report, Portrait Quality (Reference Facial Images for MRTD): https://www.icao .int/Security/FAL/TRIP/Documents/TR%20-%20Portrait%20Quality%20v1.0.pdf 