There is some "it depends" in this: for example, how much data the queries return, how often you run the report, which persistence provider you use, and so on.
I assume you have the value streams under control, i.e. you move out old data and manage their size so they don't grow endlessly.
You say there are currently no performance issues, which is great. The question is whether it will stay that way. Are you expecting more users to use ThingWorx (or the report)? Are you planning to store data for more machines or properties, or for longer periods? These are the questions to ask when you consider the scalability of your approach.
From an architectural standpoint you create load in two systems, in the DB and in TWX.
The upside of one big QNPH call is that the DB only has to find and fetch the rows once, so in general I would expect it to be a tad faster than running n queries. The downside is that you have to move a lot more data over to TWX in a single action, it needs a lot of memory at once, and the individual transaction is longer. This could come back to bite you later, when you might run into query (or service execution) timeouts.
Individual queries, on the other hand, request less data per query, transactions are smaller, and less memory is used, since you sliced the dragon and now handle the slices sequentially, allowing garbage collection to free memory in between. The downside is that the database has to fetch the data anew every time. There might be some caching, but there is a base cost you pay over and over again.
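To make the sequential idea concrete, here is a minimal sketch of the chunked approach. `fetchRows` is a hypothetical stand-in for one QNPH call per property, and the data is mocked; the point is only that peak memory stays at one property's rows instead of the whole result set.

```javascript
// Mock data source: three properties with a few history rows each (illustrative only).
const history = {
  temp:     [21, 22, 23],
  pressure: [1.0, 1.1],
  speed:    [100, 105, 110, 120],
};

// Hypothetical helper: in a real service this would be one QNPH call.
function fetchRows(propertyName) {
  return history[propertyName];
}

let totalRows = 0;
for (const prop of Object.keys(history)) {
  const rows = fetchRows(prop);   // only one property's rows in memory at a time
  totalRows += rows.length;       // aggregate, then let the chunk be collected
}

console.log(totalRows); // 9 rows processed without ever holding all of them at once
```

The same shape works for any per-slice aggregation: as long as you only keep the running result between iterations, the garbage collector can reclaim each slice.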
30 QNPH calls to get all your properties is a lot, so I see why this got you thinking. I am still leaning towards calling it the more scalable approach, but it really depends on how many rows you request, how often this runs, and how much gets filtered away.
Make sure you use the start and end date parameters, and possibly the tags, in your query, as this lets the database use its indexes and makes the queries faster. Remember that filter queries are NOT part of the WHERE clause; they are executed on the client, i.e. in TWX.
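To illustrate the difference, here is a small sketch with made-up names (`queryWithRange`, `queryAllThenFilter` are hypothetical, and a plain array stands in for the DB); in TWX you would pass startDate/endDate to the service rather than filtering the returned infotable afterwards.

```javascript
// Mock "table" with a timestamp per row (illustrative only).
const rows = [
  { ts: 1, value: 10 },
  { ts: 2, value: 20 },
  { ts: 3, value: 30 },
  { ts: 4, value: 40 },
];

// Server-side range: the DB can use its index and only matching rows
// travel over the wire.
function queryWithRange(start, end) {
  return rows.filter(r => r.ts >= start && r.ts <= end); // done "in the DB"
}

// Client-side filter: ALL rows are transferred first, then filtered in TWX.
function queryAllThenFilter(start, end) {
  const transferred = rows.slice(); // the full result set crosses the wire
  return transferred.filter(r => r.ts >= start && r.ts <= end);
}

const a = queryWithRange(2, 3);
const b = queryAllThenFilter(2, 3);
console.log(a.length, b.length); // same result (2 rows), very different transfer cost
```

Both calls return the same rows; the difference is how many rows are fetched and shipped before the filter applies, which is exactly where the index helps.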
(Caveat: I had SQL databases in mind when I wrote this; I am not sure how well it translates to InfluxDB.)
If you have doubts about the overall load, and past data does not change, you could consider storing pre-computed data so that the next report consumes fewer resources. There are several possible approaches, but they would be specific to your use case.
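One generic shape for this, sketched with hypothetical names (`dailySum`, `loadRows`; a plain object stands in for wherever you would persist the aggregates, e.g. a data table): since a closed day's data never changes, its aggregate can be computed once and reused by every later report.

```javascript
// Cache of day -> precomputed sum (stand-in for a persisted data table).
const aggregates = {};

function dailySum(day, loadRows) {
  if (day in aggregates) return aggregates[day]; // reuse, no re-query
  const sum = loadRows(day).reduce((s, v) => s + v, 0);
  aggregates[day] = sum; // past data does not change, so caching is safe
  return sum;
}

let queries = 0;
const loadRows = day => { queries++; return [1, 2, 3]; }; // mock of an expensive QNPH call

dailySum("2024-01-01", loadRows);
dailySum("2024-01-01", loadRows); // the second report hits the cache
console.log(queries); // 1 -- the expensive query ran only once
```

Whether you cache sums, averages, or whole pre-shaped report rows depends on what the report needs, which is why the concrete approach is use-case specific.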