When Lorena Almaraz was working in the non-profit sector, she kept hearing about the need to be data-driven, but she also saw a major contradiction: almost no one had the staff or resources to actually be data-driven. The more Almaraz explored the problem of using data properly and making sense of it, the more interested she became in the gap between policy and data science and how she could help fill it.
As she searched for a suitable Master’s program, Almaraz, who did her undergraduate degree in fine arts and cognitive science, focused on multidisciplinary programs that were not necessarily pure data science. “I was interested in something that was a little bit more socially oriented, that was a little bit more critical,” she explains. “I wanted to do the policy of data science.”
Eventually, she chose the Master of Information program and, more specifically, the Human-Centred Data Science concentration. That choice allowed Almaraz to get the technical background she needed – including courses in machine learning and R, a programming language for statistical computing – while also taking several elective courses from the Critical Information Policies Study concentration.
For a policy reading course, Almaraz wrote, designed and published Bits&BIPOC, a socio-technical learning guide on data, algorithms, and machine learning geared toward young Black, Indigenous, and People of Colour in Canada, encouraging them to engage in the ongoing debate over technology regulation.
As a student during the pandemic, when almost all courses were online and social activities were strictly curtailed, Almaraz made the most of things by taking on research assistantships and contract work as well as doing a co-op work term with Ontario Digital Service. By the time she graduated in the spring of 2022, AI regulation was very much on the corporate agenda and she had the skill set employers were looking for.
“Companies using AI were wondering, ‘Oh my gosh, are we doing it right? Are we compliant with what’s expected of us?’” says Almaraz, who now works for the multinational information conglomerate Thomson Reuters as a Senior Artificial Intelligence/Machine Learning Model Steward Partner.
In this AI governance role, she works directly with the company’s technical teams to ensure they are aware of standards and policies and compliant with them. “I create the training. I create the workshops. I essentially bring the policy to the technical teams and then represent the technical teams when we’re developing the policy. It’s really fun.”
For example, Almaraz talks to data scientists about ensuring that algorithms are unbiased. “They might say, ‘What do you mean? What does that look like in code?’” And she explains it to them.
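To give a flavour of what that conversation might involve: one common answer to “what does bias look like in code?” is a fairness metric such as the demographic parity gap, the difference in positive-outcome rates between groups. The sketch below is purely illustrative, with hypothetical data, and is not drawn from Almaraz’s or Thomson Reuters’ actual practice.

```python
# Illustrative sketch only: a demographic parity check compares the rate
# of positive model outcomes (e.g. loan approvals) across groups.
# All data below is hypothetical.

def demographic_parity_gap(outcomes, groups):
    """Return the gap in positive-outcome rates between the groups
    with the highest and lowest rates."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical model outputs (1 = approved, 0 = denied) and group labels.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(outcomes, groups)
print(round(gap, 2))  # 0.5: group "a" approved 75% of the time, group "b" 25%
```

A gap near zero suggests the model treats the groups similarly on this one measure; in practice, teams would look at several such metrics and at the context in which the model is used.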
As for what’s next, Almaraz says, “Now that I’ve seen the ground level work for this kind of field, I’d love to get some management experience and develop a project plan myself and see it through as I was doing, working in nonprofits. I want to see what those skills look like in this field.”