I expect many people refer to Siri as "she" because Siri defaults (or at least used to default) to a female voice, and there's a natural tendency to refer to other participants in a conversation with slightly more personalized pronouns, lest someone mistakenly assume you're referring to a lamp or couch or some other non-speaking object. I certainly don't think of Siri as a thinking AI. "She" is mostly convenient shorthand, in sentences where "it" may be referring to something else (the restaurant you want directions to or the light you want to turn off).
I also don't have unreasonable expectations of what Siri can do. I generally limit my requests to setting HomeKit scenes, setting timers, adding reminders, creating appointments (I'll let Siri handle the date, time, and title, because that's quick and gets it recorded while it's fresh in my mind, and then go fill in the details on another device), and doing basic math ("what's 473 times 12 times 17 times 1.5") and unit conversions. Oh and asking for directions to some specific destination while driving. I limit myself to these requests because I have a fairly high degree of certainty that Siri will understand and respond correctly to them.
My biggest problem with Siri - and I don't think it's marketing, more like hubris on the part of the designers - is that they have been unwilling to design in any sort of specific, published syntax, or any ability to have some sort of meta-conversation. In the former case, Siri tries to "make sense" of what you're saying, and often gets things wrong if you stray from their (unpublished and evolving) understood syntax, with the huge problem being that she rarely says, "Sorry, I don't understand what you mean" - instead she makes huge assumptions that she knows what you mean, when she actually doesn't.
In the latter case, with no capability to have a conversation about the conversation - essentially, no editing mode - it means that if I'm driving, and I ask for directions to a store, and she doesn't understand the name of the store, she will cheerfully start suggesting places that are clearly not what I want, and my only recourse is to try again, and probably get the same wrong result. There is no mechanism by which I can say, "Siri, you're misunderstanding the name of the destination, let me spell it for you: I K E A" (first name off the top of my head, not actually one she would misunderstand). I've had this happen with, say, a restaurant where the main word in the name sounds like some more common word, but if you search on the common word, you get a thousand matches. I've had occasions where I've had to resort to giving the name of some store that I remember is a few blocks away from my actual destination, simply because Siri can recognize that store name.
And, maddeningly, if she doesn't get it right the first time, she often assumes that it's you who aren't sure where you want to go, so she starts adding details that are meant to be helpful in making your choice ("this one is 2.8 miles away, and gets 3 stars, and is open until 8pm, would you like to try that one?"). When you're driving, what you REALLY want to know is whether to turn left or right at an intersection that's coming up soon, and the actual problem is that Siri didn't parse the name correctly; having her waste more and more time giving useless details, trying to "help you make up your mind", is infuriating.
I see this a lot in code - the presumption that if you get to a particular point in the code, it's because the user has made mistake X, and now it's time to explain their mistake to them as if they're five years old (when often the program ended up there for some entirely different reason the developer didn't think of). I would MUCH rather have a "personal assistant" that doesn't make assumptions - if she doesn't understand exactly what I mean, she should say, "sorry, I don't quite understand that - could you refer to my syntax manual and try again?".
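To make that concrete, here's a toy Swift sketch of the two attitudes (invented names, obviously not Apple's actual code): one fallback branch lectures the user about a mistake it has merely assumed; the other just admits it didn't understand.

    // Toy sketch - hypothetical names, not how Siri is actually built.
    enum ParseResult {
        case command(name: String)
        case notUnderstood(input: String)
    }

    func respond(to result: ParseResult) -> String {
        switch result {
        case .command(let name):
            return "Running \(name)"
        case .notUnderstood(let input):
            // Presumptuous version: assume the user made mistake X and
            // explain that mistake, even though you don't actually know.
            //   return "It looks like you forgot to say the room name."
            // Honest version: admit you don't know and point at the manual.
            return "Sorry, I don't understand \"\(input)\" - please check the syntax guide and try again."
        }
    }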
But Apple's approach has been to start out by pretending that Siri is fully conversant in English (or whatever language), tell users to just ask questions, and then try to handle whatever's thrown at Siri - often failing badly - rather than making Siri fully understand/recognize a limited subset of English.
As a programmer, I've spent nearly my whole life dealing with rigid syntaxes; I'd be quite happy issuing verbal commands using a specific syntax I'd looked up in a manual, rather than just having a sketchy "you could try things like..." list and having to guess.
And either publish the specific recognized syntax for a new subject, or don't accept queries about that subject until Siri can be fully conversant in it. As just one example, Siri knows about all the HomeKit lights in my house. She knows them by name, where they are, and their current state. I can say, "turn on my bookcase light", and that works fine, as does "turn on my living room lights". If I say, "how many of my lights are on", she'll cheerfully say, "4 of your lights are on and 13 are off". But if I say, "which of my lights are on?" - and remember, she has all the information necessary to answer this question, including the names of every light - she will ALSO answer that with, "4 of your lights are on and 13 are off".
It's not a complicated request. It's not an obscure request (i.e. if you're already adding code to handle a count of the number of lights on/off, it's a fairly obvious next step to guess that someone might want to know which ones). Siri didn't mishear the word "which", and she understands that word in other contexts. Rather, they seem to have decided "this is a query about the state of the lights" and routed it to the same back-end action. The developers decided that answering the wrong question is "good enough". But if a human did this to you, you'd be annoyed with them. And with a human, you could explain the mistake and they'd likely get it right next time - obviously that doesn't work with Siri.
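My guess at what's going on behind the scenes - purely speculative, with invented names - is that both phrasings get classified as "light status query" and handed to the same count-only handler, even though listing the names would be trivial with exactly the same data:

    // Speculative sketch of the routing problem, not Apple's actual code.
    struct Light { let name: String; let isOn: Bool }

    func answerLightStatusQuery(_ utterance: String, lights: [Light]) -> String {
        let on = lights.filter { $0.isOn }
        let off = lights.count - on.count
        // What it feels like Siri does: "how many" and "which" both collapse
        // into the same counting answer. The obvious next step is to actually
        // answer the question that was asked:
        if utterance.lowercased().contains("which") {
            return "These lights are on: " + on.map { $0.name }.joined(separator: ", ")
        }
        return "\(on.count) of your lights are on and \(off) are off"
    }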
I would MUCH rather have Siri say, "I don't know how to answer that request" than have her pick a few words out of the sentence and assume she knows what you meant. And this gets back to: I would have been happier if Siri had debuted with a more limited and much more rigid syntax - you would have to phrase any given question/command in a very specific way, with the benefit of extremely high chances that Siri would correctly parse the request if you followed the template. And this would bring the benefit that if you issued a command that fit a particular request template, it would be pretty clear that the two words where the name of a HomeKit device is supposed to go must be the name of a HomeKit device. Siri would be given very strong contexts in which to interpret such names.

I'm still gobsmacked that they thought it was a good idea to allow you to set a HomeKit scene by simply saying the name of the scene - it means that any time you say anything to Siri, she first has to check whether what you said is the name of a HomeKit scene before interpreting it in any other way. What if you name a scene using words like "play" or "set"? When you allow ambiguities like this, either you lose access to a bunch of commands, or Siri has to start guessing which interpretation you meant. (If you say "Hey Siri, play time", are you asking for the right lighting scene for the kids to run around, or asking to play the song "Time" from Pink Floyd's Dark Side of the Moon?)
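Here's roughly how I imagine that scene-name shortcut poisons everything else - again a guess, with invented names, not Apple's actual pipeline:

    // Invented sketch of why bare scene names create ambiguity.
    let sceneNames = ["Play Time", "Movie Night"]

    func interpret(_ utterance: String) -> String {
        // Because a scene can be set just by saying its name, this check has
        // to run before every other interpretation...
        if sceneNames.contains(where: { $0.lowercased() == utterance.lowercased() }) {
            return "Setting scene \(utterance)"
        }
        // ...which means "play time" never reaches the music handler,
        // even when you meant the Pink Floyd track.
        if utterance.lowercased().hasPrefix("play ") {
            return "Playing \(utterance.dropFirst(5))"
        }
        return "Sorry, I don't understand that."
    }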
In addition to this, the unpublished syntax appears to change over time, constantly tinkered with on the back-end but with no notice to users about new or changed rules. For instance, there were many times when I'd say, "Hey Siri, play 'Accidental Tech Podcast'" (the bit in quotes being the literal name of the podcast), and Siri would say, "Playing Accidental Tech Podcast" and start playing the latest episode. Then there was a period of time where I'd give the same command and get back "There's no podcast named 'Accidental Tech'", and then some weeks later the same command started working again.
If you took a job where one of the requirements was to know, say, French, and you said "yes, I know French", because you knew a smattering of French, and then your boss asked you a question in French and you gave a wildly incorrect answer because you only recognized two of the words and you just pretended to understand (sounds like a plot point for a sitcom episode)... they might consider firing you for lying about really knowing French.
Yet this is exactly the kind of thing that Siri does - they've coded her to pretend to be fully conversant in English (which I'm sure is their end goal), when in fact she only recognizes bits of it - and she doesn't say that she doesn't understand. Instead, she frequently fakes it, guessing that you probably mean something she does understand and rushing off to do that thing. If an actual human personal assistant did this repeatedly, you'd get rid of them. I'm annoyed at the developers for taking this approach.
Don't put up a facade and try to backfill before anyone notices that it's fake. Instead, get it to be really good at recognizing a limited subset of the language - sentences/commands constructed in a particular way - and then slowly expand that syntax as time/resources permit - and publish the syntax ("Siri Syntax Guide v1.0", then 2.0, 3.0, etc.), so users know what to expect, rather than just encouraging them to ask in whatever format they feel like, hoping maybe Siri will understand.
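Just to be clear about what I mean by a published, rigid syntax, here's a hypothetical sketch (an invented grammar, not a real Siri feature) of the kind of versioned template set I'd happily learn - note how anything sitting in a <device> slot can only ever be a HomeKit device name, so there's no guessing about what those words mean:

    // Hypothetical sketch of a tiny, versioned command grammar.
    import Foundation

    let grammarV1: [(template: String, action: String)] = [
        ("turn on my <device>", "lights.on"),
        ("turn off my <device>", "lights.off"),
        ("which of my lights are on", "lights.list"),
        ("how many of my lights are on", "lights.count")
    ]

    func match(_ utterance: String, devices: [String]) -> (action: String, device: String?)? {
        for entry in grammarV1 {
            if entry.template.contains("<device>") {
                let prefix = entry.template.replacingOccurrences(of: "<device>", with: "")
                guard utterance.hasPrefix(prefix) else { continue }
                let slot = String(utterance.dropFirst(prefix.count))
                // The slot text must be a known device name; anything else is
                // an honest "no match", not a guess.
                if devices.contains(slot) { return (entry.action, slot) }
            } else if utterance == entry.template {
                return (entry.action, nil)
            }
        }
        return nil
    }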
The problem isn't users asking ridiculous questions, the problem is Apple encouraging users to just ask natural language questions, as if they were speaking to a real person. (Encouraging that would be fine if they had written something with Jarvis levels of comprehension of language and context, but they're nowhere near there yet.)
Once I confirmed that setting alarms and reminders is about the only thing Siri does really well, I finally realized it was all my fault for thinking that Siri could act like a real assistant, when in fact Siri is just an alarm clock manager.