mirror of https://github.com/lyx0/ollama-twitch-bot.git
synced 2024-11-06 18:52:03 +01:00
update README.md
This commit is contained in:
parent cbd350fce7
commit 69140e9c5c
1 changed file with 9 additions and 8 deletions
README.md

@@ -3,15 +3,16 @@
 Twitch chat bot that interacts with ollama. Work in Progress.

 ## Requirements:

 Go
 [Golang](https://go.dev/)

 [Ollama.com](https://ollama.com)

 ## Build and run:

-1. Change the default values in the provided `env.example` and rename it to `.env`.
-2. Make sure ollama is up and running on the host.
-3. With docker compose:
-    * `$ make up`
-3. Without docker:
-    * `$ make build && make run`
-4. Join the Twitch channel you chose and use `()gpt <cool query>`
+1. Change the example values in the provided `env.example` and rename it to `.env`.
+2. Make sure ollama is running on the host and reachable at `localhost:11434`, and that the model you specified in the `.env` file is already downloaded and ready to go (you can check with e.g. `ollama run wizard-vicuna-uncensored`).
+3. Run:
+    - With docker compose (you may need `sudo` in front if you haven't set up rootless Docker):
+        - `$ make up`
+    - Without docker:
+        - `$ make build && make run`
+4. Join the Twitch channels you chose and type `()gpt <cool query>` and hopefully get a response.
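The new step 2 assumes ollama is already serving at `localhost:11434` with the configured model pulled. Below is a minimal sketch of that reachability check in Go, assuming the standard ollama REST API and its `/api/generate` endpoint; the model name is only the README's example, and this snippet is not part of the bot's own code.

```go
// Minimal check that a local ollama instance is reachable and can answer
// with the configured model. Assumes the standard ollama REST API on
// localhost:11434; swap in whatever model you set in your .env file.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Build a small generation request; stream=false asks for a single
	// JSON response instead of a streamed one.
	body, err := json.Marshal(map[string]any{
		"model":  "wizard-vicuna-uncensored",
		"prompt": "Say hello in one sentence.",
		"stream": false,
	})
	if err != nil {
		log.Fatal(err)
	}

	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatalf("ollama not reachable: %v", err)
	}
	defer resp.Body.Close()

	out, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Status)
	fmt.Println(string(out))
}
```

If the POST fails outright, ollama itself isn't reachable; if it responds with an error about an unknown model, pulling it first with `ollama pull <model-name>` (using the model you put in `.env`) should get it ready before you start the bot.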