I'm writing to express my enthusiasm for the ChatGPT plugin you've integrated. It's a fantastic addition that significantly enhances the platform's capabilities.
I believe there's potential to further optimize the platform's performance and cost-efficiency by adding support for local Large Language Models (LLMs). Similar to the ChatGPT plugin, users could connect their own LLMs to the platform, allowing for more tailored and cost-effective solutions.
I'm confident that this feature would be well-received by the user community. It would provide greater flexibility and control over the language models used, while also reducing reliance on external APIs.
I would be happy to discuss this proposal further and explore the potential benefits and implementation details.
Thank you for your time and consideration.
5 Answers
Aug 13 (5 months ago)
Ira Kobylianska (agent) wrote:
Thank you for your comment, Mihai Moisa
We are considering adding other language models, including custom ones, in future versions. We encourage other users to vote for this feature to speed up its implementation.
Please provide us with any further details if available.
Aug 13 (5 months ago)
Mihai Moisa wrote:
I use LM Studio or Jan AI and I'd love to connect them to PSM. It would be a game-changer if the PSM team could develop this feature: it would let everyone easily connect their data to an LLM. Being able to specify API endpoints, addresses, and ports would be essential, and the ability to see which models are available would be the icing on the cake.
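To illustrate what "seeing which models are available" could look like: local servers such as LM Studio and Jan typically expose an OpenAI-compatible `GET /v1/models` endpoint. This is a minimal sketch, assuming that endpoint and its usual response shape; the default host and port (`localhost:1234`, LM Studio's default) are assumptions, not confirmed PSM behavior.

```python
import json


def models_url(host: str = "localhost", port: int = 1234) -> str:
    # Build the OpenAI-compatible model-listing endpoint for a local server.
    # Port 1234 is LM Studio's default; Jan and others use different ports.
    return f"http://{host}:{port}/v1/models"


def model_ids(response_body: str) -> list[str]:
    # The OpenAI-compatible /v1/models response wraps models in a "data" array,
    # each entry carrying an "id" field naming the model.
    return [m["id"] for m in json.loads(response_body)["data"]]
```

A configuration screen in PSM could call the first function with user-supplied host/port values, then populate a model dropdown from the second.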
Aug 13 (5 months ago)
Mihai Moisa wrote:
Also, in the next update, please add the capability to run multiple tasks at the same time. It takes forever to run one function over 31k products.
Nov 16 (59 days ago)
Mihai Moisa wrote:
Is it possible to add a feature that allows users to configure a local server address for a local LLM instead of using the OpenAI API? Local LLM servers are fully compatible with the OpenAI API and only require changing the address and model name. I hope this change can be easily implemented and included in the next update. Thank you.
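The "only change the address and model" point can be sketched concretely: an OpenAI-compatible local server accepts the same `/v1/chat/completions` request schema, so redirecting traffic is a matter of swapping the base URL and model name. This is a minimal sketch under that assumption; the base URL and model name shown are illustrative, not actual PSM settings.

```python
import json


def chat_request(base_url: str, model: str, prompt: str) -> tuple[str, str]:
    # Build the endpoint URL and JSON body for an OpenAI-compatible
    # chat completion call. Pointing base_url at a local server
    # (e.g. LM Studio on http://localhost:1234) instead of
    # https://api.openai.com is the only change needed.
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body
```

The same payload works against either target, which is why this feature request amounts to exposing one configurable URL field rather than integrating a new API.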
Nov 25 (50 days ago)
Ira Kobylianska (agent) wrote:
Thank you for your comments, Mihai Moisa
Developing a feature requires additional steps beyond the implementation itself. Here is how the process goes: we investigate the API / possible feature and estimate its implementation (dev team), then add time for version checks (QA team). We test each new feature against all PrestaShop versions we support (starting from 1.6), including all fields related to the feature (specifically, fields that can be used by the AI content generator). It adds up quickly. At the end, we get a time estimate for the feature.
After that, we check how many paid customers have requested the feature. If only one customer has requested it, it is postponed until we have more requests. Once we have multiple requests, we push the feature into an upcoming sprint. At the moment, we have scheduled up to three sprints with features that are blockers and/or were requested multiple times by customers. That is why this feature (the local LLM) has not been added to the current sprint yet.
For example, among the features coming soon is bulk prompt execution. We had requests from multiple paid customers who have 5-7 languages and want to generate product content for 3+ fields, which requires them to run ChatGPT 15-21 times. A bulk execution feature would improve that process greatly and would help multiple customers. It is also one step closer to scheduled bulk execution (scheduling bulk tasks for new products only, which has also been requested multiple times).
We value all requests, and we truly believe local LLM support is a good feature, but it has not been upvoted or requested by other users yet. We will surely add it in future versions and contact you for assistance.