I think your idea has potential. It's a good sign you're paying attention to possible pitfalls in dealing with the subject. I think the greatest difficulty is in keeping it interesting, as it's a popular subject these days.
I skimmed your script and have some thoughts.
First, this feels like less than half the script. This isn't inherently a problem, but it may take you longer to finish than you anticipate. You've said the real meat of it is supposed to be the moral issues and perception of AI. What you have so far is focused on worldbuilding, establishing the background in which the plot takes place, including the role AI has in the world and how different people view it. This is essential to the story... but it's not the story itself. Yes, there have been signs of unexpected behaviors from androids, but this is pretty much a given in any story dealing with AI. (Moreover, because it's in a story, the reader expects the unexpected behavior to be significant.) You're at the point of introducing the plot, but the plot itself hasn't progressed very far yet.
I will mention that the script uses the term AI incorrectly. Artificial intelligence just means that a program or robot or whatever works in a seemingly intelligent way; it's not what we'd call true "intelligence," because a given AI only works for a specific purpose. Hence, it's artificial. The concept you're getting at is general intelligence -- the kind humans have. However, I think the thing people are most concerned with is sentience: regardless of whether it's intelligent, can it feel pain, pleasure, boredom, etc.? Consciousness. It's separate from intelligence, though both are issues when we talk about making humanlike AIs.
I think the worldbuilding concepts you have are solid. I like the backdrop of the weaponization of AI. But you'd benefit from more research on the state of AI, especially when it comes to tests for intelligence. The point isn't to make large changes to the script; rather, think of worldbuilding in a story as an iceberg. Most of what you learn won't show up in the script, but the tip of it that does will have more depth.
I had a computer science professor who did research on computer creativity. He wrote an AI that could write a short story. The result was pretty uninspiring, and his conclusion was that computers are not, and CANNOT be, truly creative. I mean, he had written a program that looked at existing (human) short stories, "learned" what the elements are, and put together something based on the inputs it had.
I also had a sociology professor who pointed out that if that's the standard, humans cannot be "truly creative" either; our output may be more complex and more appealing to other humans, but it's very much a culmination of the ideas, stories, and other inputs we've been exposed to throughout life. Personally, I felt that the comp sci professor was not a very good writer himself and probably couldn't write a good story, much less write an AI that writes a good story. I mean, good writing doesn't come easily even for a human.
This was 18 years ago. AIs have gotten far more advanced, but there is still a very big pitfall: garbage in, garbage out. Many modern AIs are based on a neural network model, trained using massive datasets. They learn from the data. However, what they learn may not be what you intended. There's a story about an AI that was designed to learn how to identify camouflaged tanks in a picture. It seemed to work well, but then failed in further tests. Why? Because the pictures it was trained on had tanks taken on cloudy days and non-tank pictures taken on sunny days. The AI hadn't learned how to identify tanks. It had learned how to distinguish cloudy days from sunny days.
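To make the garbage-in-garbage-out trap concrete, here's a toy sketch (with made-up numbers, not the actual tank dataset) of how a model can learn the wrong lesson. The "classifier" below just picks a brightness cutoff, which is enough to ace a training set where tank photos happen to be dark and non-tank photos happen to be bright:

```python
# Toy illustration (hypothetical data) of the camouflaged-tank pitfall:
# the model latches onto a feature that merely correlates with the label.

# Each sample: (average_brightness, has_tank). In the flawed training set,
# every tank photo was taken on a cloudy (dark) day and every non-tank
# photo on a sunny (bright) day, so brightness alone predicts "tank".
train = [(0.20, True), (0.30, True), (0.25, True),
         (0.80, False), (0.90, False), (0.85, False)]

def fit_threshold(data):
    """'Learn' the simplest possible classifier: a brightness cutoff."""
    tank_max = max(b for b, tank in data if tank)
    other_min = min(b for b, tank in data if not tank)
    return (tank_max + other_min) / 2  # midpoint between the two groups

def predict(threshold, brightness):
    return brightness < threshold  # "dark picture => tank"

threshold = fit_threshold(train)

# Perfect score on the training set...
print(all(predict(threshold, b) == tank for b, tank in train))  # True

# ...but a tank photographed on a sunny day fools it completely:
print(predict(threshold, 0.9))  # False -- the "tank detector" only
                                # ever learned to detect the weather
```

It gets 100% on its own training data yet calls a bright photo of a tank a non-tank, because the only pattern in the data was the weather.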
There are similar concerns about AIs like Watson. Watson has advanced enough language processing to win Jeopardy. There are several ways IBM has been marketing this technology. But last fall a scathing critique was released about one of its pursuits: cancer diagnosis and treatment. Human doctors require years of training and don't have the time to read and keep up with every medical paper published. So the idea is that a trained AI could "read" the papers and identify better, evidence-based treatments than human doctors can. Of course, even from the start there is a chicken-and-egg problem. Watson has to be trained by human doctors, so the results are going to be... the same things human doctors come up with. And that's only the surface of the rabbit hole. (Medicine is *incredibly* complicated.) IMO, AI is not ready to tackle medicine. Not even close. (I do believe it will be... in the future. MUCH farther in the future than your story.)
If I were going to try to convince someone that their seemingly smart AI that beat humans at Jeopardy was not really intelligent, I'd use examples like that. Just because something exhibits unexpected behavior doesn't make it intelligent or sentient. That's the kind of thing your characters should hammer home for any customers who make claims about the humanity of their androids.
As far as related works, if you haven't seen the TV series Westworld, you should definitely check it out. (I haven't seen the original 1973 film, but the TV series is much more recent and has a modern perspective.)
Also, this xkcd What If:
https://what-if.xkcd.com/5/
I hope that helps!
