Friday 31 March 2023

ChatGPT - a little serious note

 Whilst I've been having some fun doing my research, I think I will only be comfortable trusting a machine for insights outside of the coding / business / tech domain when I have open access to all its data sources and the citations that shaped the formulation of each response. I want to know what the machine is trained on, so that I can make my own judgements on its responses. AI companies must openly publish these sources, publish a bias index (something like a cultural sentiment index), and show whether they are sandboxing different philosophies / social frameworks and constructs. Without this access, I am okay to limit my interactions to well-bounded, technical and professional contexts: to help me be more productive, do research, serve as another tool to help humans optimise, etc. Beyond that, I will not entirely trust the machine enough yet to have deep, meaningful discussions about life, world politics and the challenges of human civilisation, because of biases that haven't been independently tested for. I believe the entire test suite for validating the safety and quality of the AI must be released in the open, for independent scrutiny. If tech companies are truly serious about uplifting human civilisation with general AIs, then stop the profiteering and start building humanity on open transparency; level the playing field. This is not a space race to outcompete one another. This is not the next tech to make yet another tech geek the richest person in the world! This is not about making money (I'm talking about the broad uses of the AI). Sure, you can make some money on constrained topics, but don't go releasing AI into the wild that can cause more serious harm than good.

Given this technology is mostly born of, or heavily marketed from, capitalist America (my bias: winner-takes-all, survival of the fittest, competitive, good guys vs bad guys, profit before people, we must always strive to be the world's dominant region; this unfortunately overshadows what made American ideals great, the American dream, a place I always wished to go and work), it makes me very nervous about the potential misuse and misunderstanding of the machine. I fear for the future of children growing up with this tech: what constructs would they learn, potentially throwing away thousands of years of human traditions, culture and beliefs? As such, I'm leaning more and more towards controlling these releases, and perhaps escalating regulatory reviews to inspect deeply... We don't throw babies into shark-infested waters the day they're born, so why expose such a powerful, potentially misleading tech to the whole world, children and adults alike?? We don't give guns to children, do we? We have checks and balances in place for all kinds of risky tech; take nuclear tech, for example. Honestly, this topic is so broad and wide, so concerning... it's a pity that Lex didn't address these topics directly with Sam in their interesting conversation here

Exploring ChatGPT will almost become a full-time activity outside of my work hours - so many unknowns, so many things worth exploring... but my priority would be humans first, machine last. If the pace of progress needs to be slowed down until we understand more, so be it. There should be a global moratorium on this tech, with a world governing framework like NATO/the UN/etc. to regulate it.

So my view: proceed with extreme caution; be bold in exploring the limits of the machine for a well-bounded context limited to science / computing / business / historical facts, but nothing more... until there's a way to characterise these AIs by archetypes we can identify with.

I have to say, I am leaning heavily on the side of regulation - putting on the brakes!!

IMHO that is...

Time to start my day job!! Plugging into the matrix now...
