Hi everyone,
I am currently working on a use case where we have multiple repositories, and each path contains around 25 images, each with a file size of approximately 400 KB.
The customer would like to retrieve these images from ThingWorx via an API.
Before I implement the solution, I would like to get some best-practice recommendations from the community:
What is the best way to prepare and serve many images via an API in ThingWorx?
Should we use:
LoadBinary for each image,
LoadImage,
SaveImage / Base64 output,
or another recommended approach? (A rough sketch of the naive version we would start from follows below.)
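For context, this is roughly the naive version we would start from: one custom service that lists the folder and loads every image into a single infotable. GetFileListing and LoadImage are standard FileRepository services; the repository name, the input parameter, and the field names below are placeholders invented for the sketch.

```javascript
// Sketch of a custom ThingWorx service (server-side JavaScript) that returns all
// images in one folder as a single infotable.
// Service input:  folderPath (STRING), e.g. "/orders/4711"   <-- placeholder path
// Service output: result (INFOTABLE)

var repo = Things["ImageRepository"];                 // hypothetical FileRepository Thing
var listing = repo.GetFileListing({ path: folderPath, nameMask: "*" });

// One row per image; IMAGE values are serialized as Base64 strings in the JSON response
var result = Resources["InfoTableFunctions"].CreateInfoTable({ infoTableName: "Images" });
result.AddField({ name: "fileName", baseType: "STRING" });
result.AddField({ name: "sizeBytes", baseType: "NUMBER" });
result.AddField({ name: "content", baseType: "IMAGE" });

for (var i = 0; i < listing.rows.length; i++) {
    var file = listing.rows[i];
    result.AddRow({
        fileName: file.name,
        sizeBytes: file.size,
        content: repo.LoadImage({ path: folderPath + "/" + file.name })
    });
}
```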
Which of these is more efficient and feasible:
Returning image streams directly,
Converting them to Base64,
Or providing download URLs (e.g., via Content Caching)? (A sketch of the URL-based approach follows below.)
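For the download-URL option, our rough idea would be to return only metadata plus a link per image, since ThingWorx serves repository files under /Thingworx/FileRepositories/<repositoryName>/<path>. Again, all names below are placeholders:

```javascript
// Sketch of the URL-based variant: return metadata and a download link per image
// instead of the binary content. The client then fetches each image separately
// (with a valid session or appKey).
// Service input:  folderPath (STRING)
// Service output: result (INFOTABLE)

var repoName = "ImageRepository";                     // hypothetical repository name
var listing = Things[repoName].GetFileListing({ path: folderPath, nameMask: "*" });

var result = Resources["InfoTableFunctions"].CreateInfoTable({ infoTableName: "ImageLinks" });
result.AddField({ name: "fileName", baseType: "STRING" });
result.AddField({ name: "sizeBytes", baseType: "NUMBER" });
result.AddField({ name: "downloadUrl", baseType: "STRING" });

for (var i = 0; i < listing.rows.length; i++) {
    var file = listing.rows[i];
    result.AddRow({
        fileName: file.name,
        sizeBytes: file.size,
        downloadUrl: "/Thingworx/FileRepositories/" + repoName + folderPath + "/" + file.name
    });
}
```

This would avoid pushing the full ~10 MB through one service response and would let the customer's HTTP client download (and cache) the images individually.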
Are there performance concerns when retrieving 25×400 KB = ~10 MB per request from a ThingWorx server?
Any recommended throttling, batching, or pagination strategies?
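On the pagination point, the simplest thing we can think of is to add pageNumber/pageSize inputs to the same service and only load the requested slice of the folder listing; a minimal sketch with the same placeholder names:

```javascript
// Minimal pagination sketch.
// Extra service inputs: pageNumber (NUMBER, 1-based) and pageSize (NUMBER)

var repo = Things["ImageRepository"];                 // hypothetical FileRepository Thing
var listing = repo.GetFileListing({ path: folderPath, nameMask: "*" });

var start = (pageNumber - 1) * pageSize;
var end = Math.min(start + pageSize, listing.rows.length);

var result = Resources["InfoTableFunctions"].CreateInfoTable({ infoTableName: "ImagePage" });
result.AddField({ name: "fileName", baseType: "STRING" });
result.AddField({ name: "content", baseType: "IMAGE" });

for (var i = start; i < end; i++) {
    var file = listing.rows[i];
    result.AddRow({
        fileName: file.name,
        content: repo.LoadImage({ path: folderPath + "/" + file.name })
    });
}
```

With only ~25 files per folder this may be overkill, but it would keep individual responses small if the folders grow.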
We want to provide the customer with a clean, fast, and scalable API that allows them to retrieve all images from a given folder path.
Looking for advice on optimal services, data structures, and performance considerations.
Thanks in advance for any suggestions or insights!
Hi @MA8731174
To effectively prepare and serve multiple images via an API in ThingWorx, consider three areas: image retrieval methods, performance considerations, and content delivery.
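For the content-delivery part, one common pattern is to expose a single wrapper service and have the customer invoke it through the standard ThingWorx REST endpoint (POST /Thingworx/Things/<thing>/Services/<service> with an application key). A minimal illustrative sketch of such a client call; the Thing, service, and parameter names are placeholders matching the sketches in your post:

```javascript
// Illustrative client-side call (Node.js 18+, built-in fetch) against the standard
// ThingWorx REST endpoint for invoking a service on a Thing.
// "ImageApiThing" and "GetImagesForFolder" are hypothetical names.

async function fetchImages() {
    const response = await fetch(
        "https://thingworx.example.com/Thingworx/Things/ImageApiThing/Services/GetImagesForFolder",
        {
            method: "POST",
            headers: {
                "appKey": process.env.TWX_APP_KEY,    // application key for authentication
                "Content-Type": "application/json",
                "Accept": "application/json"
            },
            body: JSON.stringify({ folderPath: "/orders/4711", pageNumber: 1, pageSize: 10 })
        }
    );
    if (!response.ok) {
        throw new Error("ThingWorx call failed: " + response.status);
    }
    const images = await response.json();             // infotable serialized as JSON ({ rows: [...] })
    console.log(images.rows.map(function (r) { return r.fileName; }));
}

fetchImages().catch(console.error);
```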
Hope this information is helpful.
Regards.
--Sharon
