```python
# Chat with an intelligent assistant in your terminal
from openai import OpenAI

# Point to the local server
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

history = [
    {"role": "system", "content": "You are an intelligent assistant. You always provide well-reasoned answers that are both correct and helpful."},
    {"role": "user", "content": "Hello, introduce yourself to someone opening this program for the first time. Be concise."},
]

while True:
    completion = client.chat.completions.create(
        model="model-identifier",
        messages=history,
        temperature=0.7,
        stream=True,
    )

    new_message = {"role": "assistant", "content": ""}

    for chunk in completion:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
            new_message["content"] += chunk.choices[0].delta.content

    history.append(new_message)

    # Uncomment to see chat history
    # import json
    # gray_color = "\033[90m"
    # reset_color = "\033[0m"
    # print(f"{gray_color}\n{'-'*20} History dump {'-'*20}\n")
    # print(json.dumps(history, indent=2))
    # print(f"\n{'-'*55}\n{reset_color}")

    print()
    history.append({"role": "user", "content": input("> ")})
```
Comments
I can't get VTS pog to start up
Heya! That's super weird! You can find a link to our Discord server on the main page https://vtspog.com , feel free to make a thread in support there and we can give you a hand to try to make it work. I'm positive we can figure it out if we check it out together ♥
How do I update? I can't seem to find it anywhere
Heya! You need to download the new version, unzip the file, and either replace the folder or delete the old one and just use the new one. VTS P.O.G. saves its configuration in Windows AppData, so configs carry over between versions; they are not saved in the vtspog folder.
Doesn't seem to run through CrossOver on Mac; the program's window is just nonexistent, with no plugin pop-up in VTS…
I have little experience with Mac development myself, so I wouldn't even know where to start with that; sadly it's a bit outside my knowledge as a developer. I'm planning on getting a Mac around February (hopefully) to adapt VTS P.O.G. for Mac, because I don't even have one myself and they're a bit hard/expensive to get where I live (Argentina).
I would try looking at other VTS plugins with Mac support for reference.
Looking forward to seeing a Linux version!
Not a Linux user myself, so I have little experience adapting it to Linux, BUT one of our users made a guide on how to run it there. I wasn't able to try it myself, but it seems to be working after a few hoops! Let me link the video they made going through their setup here.
Thanks for the reply. I'm familiar with Lutris but I don't believe I had tried it for VTS-POG.
I'll give this a shot later and report my findings.
let me know how it goes ♥
Lutris+Proton-GE was the trick!
I was given a free copy to test with some months back that I couldn't get working.
Now that I have this running, I'm buying a copy.
Still would be great to see a native Linux version but this definitely helps!
This is the best thing. Thank you so much for your help, and thank you so much for making this.
I want to buy it, but I want to know if it can use local LLM models.
Not at the moment, but Ollama support is in the works; it's the most stable option for local LLMs I have found so far. If you know how to code, you can also send generated responses to VTS P.O.G. through our local API, but the setup may be a bit complex unless you are confident in your Python (given most local LLM solutions run in Python).
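To give a feel for the "send generated responses through our local API" route, here is a minimal sketch of pushing LLM output to a locally running VTS P.O.G. instance over HTTP. The port, endpoint path, and JSON field names below are placeholder assumptions for illustration, not VTS P.O.G.'s documented API; check the plugin's actual API reference for the real ones.

```python
# Sketch: push an LLM-generated reply to a locally running VTS P.O.G. instance.
# NOTE: port, path, and JSON fields are placeholders / assumptions, not the
# documented VTS P.O.G. API. Consult the plugin's API reference for the real ones.
import json
import urllib.request


def build_payload(text: str) -> bytes:
    # Wrap the generated text in a minimal JSON body (hypothetical schema).
    return json.dumps({"message": text}).encode("utf-8")


def send_to_vtspog(text: str, url: str = "http://localhost:8080/api/message") -> int:
    # POST the payload and return the HTTP status code.
    req = urllib.request.Request(
        url,
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status
```

Any local LLM loop (Ollama, LM Studio, a plain transformers script) could call `send_to_vtspog()` with each generated reply once the real endpoint is substituted in.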
For the OpenAI stuff, why not just make the base URL customizable so we can use LM Studio's server?
Also add support for Coqui TTS, or add support for xtts-api-server: https://github.com/daswer123/xtts-api-server
Thanks, I will be waiting then ^^
If they use the OpenAI API, the base URL can be changed and we can run a server with LM Studio. It would be dope if they could also update and support xtts-api-server.
Didn't know about LM Studio or Coqui, so I would need to give them a check, but I'm always open to suggestions. Will try giving them a look once I have the time~
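For anyone curious about the xtts-api-server suggestion above, a request to it looks roughly like the sketch below: POST the text, a reference speaker, and a language, and get audio bytes back. The default port, endpoint path, and field names here are my assumptions from that project's README, so double-check against the repository before relying on them.

```python
# Sketch: request synthesized speech from a locally running xtts-api-server.
# NOTE: port, endpoint, and field names are assumptions based on the project's
# README (https://github.com/daswer123/xtts-api-server); verify before use.
import json
import urllib.request


def build_tts_request(text: str, speaker: str = "example.wav", language: str = "en") -> bytes:
    # Assemble the JSON body (assumed schema: text, speaker_wav, language).
    return json.dumps({
        "text": text,
        "speaker_wav": speaker,  # reference voice file known to the server (assumed field)
        "language": language,
    }).encode("utf-8")


def synthesize(text: str, url: str = "http://localhost:8020/tts_to_audio/") -> bytes:
    # POST the request and return the raw audio bytes (e.g. WAV data).
    req = urllib.request.Request(
        url,
        data=build_tts_request(text),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return resp.read()
```

The returned bytes could then be written to a `.wav` file or streamed to an audio device by whatever program is driving the avatar.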
Is it possible to change the GPT model? I was hoping to use the new 4o-mini model.
Next release! It's already available in a test build I'm sharing for testing; hit me up in the DMs on the Discord server and I don't mind sharing it ♥