Page 20 - Security Today, November/December 2024
SECURITY TRENDS
Commission says “will ensure that companies in scope identify
and address adverse human rights and environmental impacts of
their actions inside and outside Europe.”
At its core, the law requires businesses to conduct appropriate
due diligence into the potential human rights and environmental
impact of the company’s operations, as well as that of its subsid-
iaries, partners and vendors. This is key, because it means organi-
zations are not just responsible for their own actions, but those of
businesses up and down the supply chain.
This means businesses have a responsibility to ensure, for example, that their products are not manufactured in sweatshops and their materials are not obtained via slave labor. It also means
they cannot simply shift pollution-heavy activities to countries
with lax environmental laws. Under CSDDD, businesses have an obligation to ensure that they are working in a manner consistent with both the human rights and environmental ideals of the EU.
Critically, businesses operating in the EU all share this obligation—an individual business looking to gain a competitive advantage by circumventing the law risks steep financial penalties (not to mention reputational damage).
CSDDD falls within a category that the United Nations terms “Human Rights Due Diligence” (HRDD). And while the UN lacks a meaningful enforcement mechanism, the ideals outlined in the organization’s “Guiding Principles on Business and Human Rights” have influenced the shape and direction of regulations like CSDDD—as well as regulations in other countries.
The full impact of CSDDD has yet to be felt (as with GDPR, many organizations are likely holding their breath to see what the first enforcement actions and fines look like), but the gradual shift away from human rights violators has already begun. Businesses in the technology industry (security included) should be evaluating their supply chains with both human rights and environmental sustainability in mind.
ARTIFICIAL INTELLIGENCE
IS INCREASINGLY UNDER THE MICROSCOPE
As AI has evolved—and become more mainstream—regulators have taken a renewed interest in the technology. AI solutions require vast amounts of data to train them effectively, and where that data comes from matters.
Are AI providers using privileged information to train their
solutions? Are they using copyrighted material? Are their data
sets discriminatory in any way? How do they find, account for, and end inherent biases?
This last point is particularly important — for example, a facial recognition solution that struggles to differentiate between people of color can (and likely will) result in serious and damaging discrimination. That will negatively affect the customer’s reputation, but it can also have significant legal repercussions. It’s
important to remember that it is not just about the data — how
AI is used in practice needs to be carefully considered as well.
Unfortunately, the rapid pace of technological advancement
means regulations tend to lag behind — but the EU took a step in
the right direction this year when it passed the EU AI Act, billed as “the world’s first comprehensive AI law.” The new law breaks
AI applications and systems into three risk categories: unaccept-
able risk, high risk, and low risk.
Those that constitute an unacceptable risk — such as the gov-
ernment-run social scoring systems used in some countries — are