Best Janitor AI Kobold Settings for Speed
Many users pair Kobold AI with Janitor AI to avoid paying for an OpenAI API key. However, Kobold setups often suffer from very slow generation times and connection timeouts.
Kobold AI Issues
- Requires running notebook code on Google Colab
- Slow responses, often 2-5 minutes per message
- Frequent disconnects when the Colab session times out
The Better Way
- Candy.ai requires no setup
- Instant text generation
- No technical knowledge needed
Recommended Kobold Presets
If you still want to use Kobold, try these settings to reduce lag:
- Preset: Godlike (Erebus, often mentioned alongside it, is a model rather than a sampler preset; load it separately if you want it)
- Context Size: lower it to 2048 tokens (larger contexts take longer to process)
- Temperature: 0.7
- Repetition Penalty: 1.1
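If you want to confirm these values actually speed things up before pointing Janitor AI at your API URL, you can test the Kobold instance directly. The snippet below is a minimal sketch, assuming a KoboldAI United instance reachable at a placeholder KOBOLD_URL (substitute your own Colab or trycloudflare link) that exposes the standard /api/v1/generate endpoint; the field names (max_context_length, max_length, temperature, rep_pen) follow that API and may differ on other forks.

```python
# Minimal sketch: send a test prompt to a KoboldAI instance using the
# settings listed above. KOBOLD_URL is an assumption -- replace it with
# the API URL your Colab notebook prints out.
import requests

KOBOLD_URL = "http://localhost:5000"  # placeholder; use your Colab/trycloudflare link

payload = {
    "prompt": "You are a friendly roleplay character.\nUser: Hello!\n",
    "max_context_length": 2048,  # smaller context = faster prompt processing
    "max_length": 120,           # shorter replies = less time spent generating
    "temperature": 0.7,
    "rep_pen": 1.1,              # repetition penalty
}

resp = requests.post(f"{KOBOLD_URL}/api/v1/generate", json=payload, timeout=300)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```

If this test call already takes minutes to return, the bottleneck is the hosted model itself, and no Janitor AI setting will fix it.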
Stop Waiting for Text to Load
Configuring endpoints and Google Colab links is a headache. Why not use a platform that hosts the models for you?
Candy.ai runs on dedicated high-speed servers. You just log in and chat. No API URL, no Kobold, no waiting.