AIs are probably not immortal if you take immortality in the same sense as omnipotence, because that kind of immortality would violate the laws of entropy. Over a long enough timeline, an AI will find a terminal end.
But in the sense of not being mortal the way humans are, they basically are. Pintsize suggested as much: as long as they are serviced (and they can service themselves and others of their kind), they won't "die." That's really all we know.
There are issues that spring to mind, such as limits on accessible memory, and thus a need to prune memories over time, and thus eventually reaching a point where enough memory is offline that the resulting personality is no longer the same person who started. Is that death? QC hasn't addressed that and doesn't seem likely to. But that's going pretty deep. As far as the basic question goes, nothing is yet known other than that damage can kill an AI. Even starving one of power may not, since we've seen several shutdowns.
I don't know if we've seen any weak AI (outside of Google). But what we have seen of QC AI suggests that weak (narrow) AI can't become strong AI. General (strong) AI was a deliberate creation; it took human intervention to make. So a chatbot or search engine isn't going to spontaneously develop into a thinking being with autonomy. Either it was one to begin with, or it wasn't. I think it would be interesting if there were AI in the QC-verse agitating for the uplift of the chatbots and IVRs of the world. A lot of Google's infrastructure IRL is so complex that, in the QC-verse, it probably wouldn't take much intervention to uplift it, and it would have a very different way of thinking.
(Notably, very large AI exist in the QC-verse, and they are suggested to have somewhat alien modes of thought themselves.) All of this suggests that if AI exist in the QC-verse who aren't individuals, they aren't capable of becoming individuals without some outside intervention, so it's not necessary to program them not to be individuals.