I run the llama3.2 model in the terminal. I use it to debug the Ollama with JavaScript code, since I have some issues with accessing the service through the API address. The experience is similar to ChatGPT, although neither of them solved the problem. It gives me some general advice and can also understand code. To me, ChatGPT has two main advantages over this setup. One is that ChatGPT has a more user-friendly interface: since I run Ollama in the terminal, I can’t break lines or go back to edit the previous content. The other is that the results aren’t stored once I close the terminal.
Then, about the Ollama with JavaScript code. I copied the code from the class example and tried both the plain JavaScript version and the p5 one. I can run the code and see the interface, but both report the same error, saying that “http://localhost:11434/api/chat” was not found. Following llama3.2’s suggestion, I opened the link directly in my browser, and it also shows 404. Since the address begins with “localhost”, maybe I need to start Ollama from the terminal myself. I used the “ollama serve” command, and it gives me the following error. But I can still use the Ollama models in the terminal with the “run” command.
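For reference, here is a minimal sketch of the kind of request my code is trying to make, written in plain JavaScript. This is my own simplified version, not the class example itself, and it assumes the standard Ollama chat endpoint on the default port 11434 with the llama3.2 model already pulled:

```javascript
// Minimal sketch of a chat request to a local Ollama server.
// Assumes Ollama is running on the default port 11434 and that
// the llama3.2 model has already been pulled.
async function askOllama(prompt) {
  const response = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2",
      messages: [{ role: "user", content: prompt }],
      stream: false, // return one JSON object instead of a stream
    }),
  });

  // A 404 or network error here usually means the server is not
  // reachable or the path is wrong.
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status} ${response.statusText}`);
  }

  const data = await response.json();
  return data.message.content; // the assistant's reply text
}

askOllama("Why am I getting a 404 from /api/chat?")
  .then(console.log)
  .catch(console.error);
```

If I understand the API correctly, the browser test may also be a bit misleading: the chat endpoint only accepts POST requests, so opening it in a browser (which sends GET) can show 404 even when the server is running, whereas visiting http://localhost:11434/ should show a short “Ollama is running” message.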
The idea that “LLMs can only write ransom notes” makes me wonder whether large language models are better at writing poetry than other forms of text-based content, since the links between words in poems are weaker than in other forms such as academic papers. It also reminds me of a project made by my classmate in another IMA class, “AI for creates”. She used a text-to-image AI to
I read the article that introduces the RNN. I am surprised that it learns at the character level instead of the word level. It makes me think about whether AI “understands” the meaning of the words it composes from characters. Also, we have models that can recognize objects in an image and pair each object with a label. In reverse, do they match words to images when they are “talking”? For example, when we say “an apple”, we have an image of an apple in our head. Does AI have such an “intuitive” process? Or would the content generated by AI be closer to what humans create if AI had a visualization process when “speaking”?
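To make the character-level point concrete, here is a tiny sketch of my own (not from the article) showing how the same text looks to a character-level model versus a word-level one:

```javascript
// My own illustration of the difference the RNN article describes:
// a character-level model sees a sequence of characters, not words.
const text = "an apple";

const charTokens = Array.from(text);   // ["a", "n", " ", "a", "p", "p", "l", "e"]
const wordTokens = text.split(/\s+/);  // ["an", "apple"]

console.log(charTokens.length, "character tokens:", charTokens);
console.log(wordTokens.length, "word tokens:", wordTokens);

// A character-level RNN is trained to predict the next character:
// given "an appl" it should assign high probability to "e".
// Any notion of the word "apple" has to emerge from those predictions,
// which is part of why I wonder whether it "understands" words at all.
```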