Future of Privacy Forum Publishes XR Biometric Data Framework

The Future of Privacy Forum (FPF) published a framework on Tuesday for regulating biometric data in immersive technologies.

The FPF’s Risk Framework for Body-Related Data in Immersive Technologies report discusses best practices for collecting, using, and transferring body-related data across entities.

Organisations, businesses, and individuals can adopt the FPF’s observations as recommendations and as a foundation for safe, responsible extended reality (XR) policies, particularly entities that handle large amounts of biometric data in immersive technologies.

Furthermore, those following the report’s guidelines can apply the framework to document their reasons and methodologies for handling biometric data, comply with laws and standards, evaluate privacy and safety risks, and weigh ethical considerations when collecting data from devices.

The framework applies not only to XR-related organisations but also to any institution leveraging technologies dependent on the processing of biometrics.

Jameson Spivack, Senior Policy Analyst, Immersive Technologies, and Daniel Berrick, Policy Counsel, co-authored the report.

Your Data: Handled with Care

To understand how to handle personal data, organisations must identify potential privacy risks, ensure compliance with laws, and implement best practices that boost safety and privacy, the FPF explained.

Body-related data risk framework. PHOTO: Future of Privacy Forum

According to Stage One of the framework, organisations can do so by taking the steps below (see the sketch after the list):

  • Creating data maps that outline their data practices linked to biometric information
  • Documenting their use of data and practices
  • Identifying pertinent stakeholders, direct and third-party, affected by the organisation’s data practices
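
For illustration, a single data-map entry might be modelled as a simple record. The sketch below is a minimal, hypothetical Python example; the field names are assumptions made for the illustration, not terms taken from the FPF report.

    # A minimal sketch of one data-map entry; the field names are
    # illustrative assumptions, not taken from the FPF report.
    from dataclasses import dataclass, field

    @dataclass
    class DataMapEntry:
        data_type: str           # e.g. "eye tracking" or "hand pose"
        source_device: str       # sensor or device producing the data
        purpose: str             # documented reason for collection
        legal_basis: str         # e.g. "consent" or "legitimate interest"
        recipients: list = field(default_factory=list)  # third parties receiving the data
        retention_days: int = 0  # 0 = not kept beyond the session

    entry = DataMapEntry(
        data_type="eye tracking",
        source_device="headset gaze sensor",
        purpose="foveated rendering",
        legal_basis="consent",
        recipients=[],           # processed on device, never transferred
        retention_days=0,
    )
    print(entry.data_type, "->", entry.purpose)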

Companies would then analyse applicable legal frameworks in Stage Two to ensure compliance. This applies to companies collecting, using, or transferring “body-related data” covered by US privacy laws.

To comply, the framework recommends that organisations “understand the individual rights and business obligations” applicable to “existing comprehensive and sectoral privacy laws.”

Organisations should also analyse emerging laws and regulations and how they would impact “body-based data practices.”

In Stage Three, companies, organisations, and institutions should identify and assess risks to others, including the individuals, communities, and societies they serve.

Privacy risks and harms could derive from data “used or handled in particular ways, or transferred to particular parties,” the report said.

It added that legal compliance “may not be enough to mitigate risks.”

To maximise safety, companies can follow several steps to protect data, such as proactively identifying and reducing the risks associated with their data practices.

This involves assessing impacts across the following factors (a rough scoring sketch follows the list):

    • Identifiability
    • Use to make key decisions
    • Sensitivity
    • Partners and other third-party groups
    • The potential for inferences
    • Data retention
    • Data accuracy and bias
    • User expectations and understanding
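
As a rough illustration, an internal review could weight these factors into a single score. The weights and factor names below are hypothetical assumptions, not values taken from the FPF framework.

    # A hypothetical weighting of the risk factors above; the weights
    # are illustrative assumptions, not FPF guidance.
    RISK_FACTORS = {
        "identifiability": 3,      # can the data single out a person?
        "key_decisions": 3,        # used for eligibility, access, etc.?
        "sensitivity": 2,          # biometric, health, or similar data
        "third_party_sharing": 2,  # partners and other external recipients
        "inference_potential": 2,  # what can be inferred beyond raw data?
        "retention": 1,            # how long is the data kept?
        "accuracy_and_bias": 2,    # error rates and disparate impact
        "user_expectations": 1,    # would the collection surprise users?
    }

    def risk_score(assessment):
        """Sum the weights of every factor flagged as raising risk."""
        return sum(w for name, w in RISK_FACTORS.items() if assessment.get(name))

    practice = {"identifiability": True, "sensitivity": True, "retention": True}
    print(risk_score(practice))  # 6 -- compare against an internal threshold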

After evaluating a group’s data use, organisations can assess the fairness and ethics of its data practices based on the identified risks, the report explained.

Data Categories and Data Types. PHOTO: Future of Privacy Forum

Finally, the FPF framework recommends implementing best practices in Stage Four, which involves a “number of legal, technical, and policy safeguards organisations can use.”

It added that this would help organisations maintain “statutory and regulatory compliance, minimize privacy risks, and ensure that immersive technologies are used fairly, ethically, and responsibly.”

The framework recommends that organisations intentionally implement best practices by comprehensively “touching all parts of the data lifecycle and addressing all relevant risks.”

Organisations can also collaboratively implement best practices using those “developed in consultation with multidisciplinary teams within an organization.”

These would involve legal, product, engineering, trust and safety, and privacy-related stakeholders.

Organisations can protect their data by the following means (the first two are sketched after the list):

  • Processing and storing data locally on devices
  • Minimising data footprints
  • Implementing third-party management controls
  • Offering meaningful notice and consent
  • Preserving data integrity
  • Providing user controls
  • Incorporating privacy-enhancing technologies

Organisations could then align these best practices into a coherent strategy and reassess them on an ongoing basis to maintain their efficacy.

EU Proceeds with Artificial Intelligence (AI) Act

The news comes right after the European Union moved forward with its AI Act, which the FPF states will have a “broad extraterritorial impact.”

Currently under negotiation with member states, the legislation aims to protect citizens from harmful and unethical uses of AI-based solutions.

The organisation is offering guidance, expertise, and training for companies as the Act prepares to enter into force. The legislation marks one of the biggest changes in data privacy policy since the introduction of the General Data Protection Regulation (GDPR) in May 2016.

The European Commission stated it wants to “regulate artificial intelligence (AI)” to ensure improved conditions for using and rolling out the technology.

It said in a statement,

“In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation. Once approved, these will be the world’s first rules on AI.”

According to the Commission, it aims to approve the Act by the end of the year.

Biden-Harris Executive Order on AI

In late October, the Biden-Harris administration issued an executive order on the regulation of AI. The Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence aims to safeguard citizens from the harmful effects of AI programmes.

Enterprises and organisations will need to comply with new requirements for “developers of the most powerful AI systems” to share their safety assessments with the US Government.

Responding to the order, the FPF said it was “incredibly comprehensive,” offering a “whole of government approach and with an impact beyond government agencies.”

It continued in its official statement,

“Although the executive order focuses on the government’s use of AI, the influence on the private sector will be profound due to the extensive requirements for government vendors, worker surveillance, education and housing priorities, the development of standards to conduct risk assessments and mitigate bias, the investments in privacy enhancing technologies, and more.”

The statement also called on lawmakers to implement “bipartisan privacy legislation,” describing it as “the most important precursor for protections for AI that impact vulnerable populations.”

UK Hosts AI Safety Summit

Additionally, the United Kingdom hosted its AI Safety Summit at the iconic Bletchley Park, where the world-renowned scientist Alan Turing cracked Nazi Germany’s World War II-era Enigma code.

At the event, some of the industry’s top experts, executives, companies, and organisations gathered to outline safeguards for regulating AI.

Attendees included representatives of the US and UK governments, the EU, and the UN, alongside the Alan Turing Institute, the Future of Life Institute, Tesla, OpenAI, and many others. The groups discussed methods to create a shared understanding of the risks of AI, collaborate on best practices, and develop a framework for AI safety research.

The Fight for Data Rights

The news comes as multiple organisations enter fresh alliances to tackle ongoing concerns over the use of virtual, augmented, and mixed reality (VR/AR/MR), AI, and other emerging technologies.

For example, Meta Platforms and IBM launched a massive alliance to develop best practices for artificial intelligence and biometric data, and to help create regulatory frameworks for tech companies worldwide.

The Global AI Alliance hosts more than 30 organisations, companies, and individuals from across the global tech community, including AMD, HuggingFace, CERN, The Linux Foundation, and others.

Furthermore, organisations like the Washington, DC-based XR Association, Europe’s XR4Europe alliance, the globally recognised Metaverse Standards Forum, and the Gatherverse, among others, have contributed enormously to the implementation of best practices for those building the future of spatial technologies.