Robo-photos don’t work like that
Reading your piece in the Freelance on the march of the robo-photo, I think you're missing the real point. These are not montages at all. You won't find them in a reverse image search - not, at least, until they've actually been published online.
The "artificial intelligence" is generating an entirely new image, based on what it knows about images. It's not simply combining bits.
What it does is more like a potter who has spent time in many studios settling down to make her own pot.
Have you seen the site whichfaceisreal.com? It's very hard to score better than random. The fake faces there are new inventions, not amalgams.
Mike Holderness responds:
I tried a reverse image search because several of the examples I got did look very much like cartoon figures dropped into a single actual photo of trees.
On reconsideration, it's possible that the system does generate the trees leaf by leaf...
On a third hand, I'm not the only one to have the impression that on occasion some image-generation systems simply lift an entire image from their training set to use as a background. I am interested in a paper entitled Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models by Gowthami Somepalli, Vasu Singla, Micah Goldblum, Jonas Geiping and Tom Goldstein in which they "identify cases where diffusion models, including the popular Stable Diffusion model, blatantly copy from their training data".
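The wholesale copying that Somepalli and colleagues describe can in principle be detected by comparing perceptual "fingerprints" of images rather than exact pixels, which is roughly what reverse image search does. Here is a minimal, purely illustrative sketch using a toy "average hash" on invented miniature greyscale images; nothing in it comes from the paper's actual method:

```python
# Toy demonstration of perceptual hashing: a near-copy of an image
# produces almost the same fingerprint, while an unrelated image does not.
# The tiny 4x4 "images" below are invented for illustration.

def average_hash(pixels):
    """Hash a square greyscale image (list of rows of 0-255 ints):
    each bit records whether a pixel is above the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two fingerprints."""
    return sum(a != b for a, b in zip(h1, h2))

# A "training image", a near-copy with slight brightness changes,
# and an unrelated image.
original = [[10, 10, 200, 200],
            [10, 10, 200, 200],
            [200, 200, 10, 10],
            [200, 200, 10, 10]]
near_copy = [[12, 9, 198, 205],
             [11, 8, 202, 199],
             [199, 201, 12, 9],
             [205, 198, 11, 10]]
unrelated = [[200, 10, 200, 10],
             [10, 200, 10, 200],
             [200, 10, 200, 10],
             [10, 200, 10, 200]]

d_copy = hamming(average_hash(original), average_hash(near_copy))
d_other = hamming(average_hash(original), average_hash(unrelated))
print(d_copy, d_other)  # the near-copy is far closer to the original
```

Real systems use much larger hashes and subtler comparisons, but the principle is the same: a background lifted near-verbatim from a training set would match its source closely, while a genuinely new image would not.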
And see below... does the DALL-E 2 system have so good an "idea" of what the National Gallery looks like that it can draw it from scratch? David Hoffman insists that it can, and points out that the windows are subtly wrong.

I asked DALL-E 2 for one of David's classic photos, and the above is what it generated from the prompt "couple kissing in Trafalgar Square with a burning building in the background". It's facing the wrong way!