
Using LLaMa with VSCode

  • Writer: Joseph
  • Jul 6, 2024
  • 1 min read

Download and install Ollama. Multiple LLMs are available for Ollama; in this case we will use Codellama, which can generate and discuss code from text prompts. Once Ollama is installed, download the Codellama model:

ollama pull codellama

Check that the model is now available locally

ollama list

Run Codellama

ollama run codellama

Test the model

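Besides the interactive CLI session, the model can also be tested programmatically. A minimal sketch that sends a one-shot prompt to the local Ollama REST API (the default port 11434 is Ollama's standard; the prompt is only illustrative):

```python
# Minimal sketch: send a one-shot prompt to a locally running Ollama server.
# Falls back gracefully when the server is not reachable.
import json
import urllib.error
import urllib.request

def ask_codellama(prompt, url="http://localhost:11434/api/generate"):
    payload = json.dumps({
        "model": "codellama",
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    try:
        with urllib.request.urlopen(req, timeout=60) as resp:
            return json.load(resp)["response"]
    except (urllib.error.URLError, OSError):
        return None  # server not running or not reachable

answer = ask_codellama("Write a Python function that reverses a string.")
print(answer if answer is not None else "Ollama server not reachable")
```

The same endpoint is what the CLI and editor extensions talk to under the hood, so this is a quick way to confirm the server is serving the model.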

Modelfile


A Modelfile is the blueprint for creating and sharing models with Ollama.

FROM codellama

# sets the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# sets the context window size to 1500; this controls how many tokens the LLM can use as context to generate the next token
PARAMETER num_ctx 1500

# sets a custom system message to specify the behavior of the chat assistant
SYSTEM You are an expert Code Assistant
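To see why a higher temperature is "more creative" and a lower one "more coherent", here is a small self-contained sketch of temperature scaling over made-up next-token logits (illustrative only, not Ollama internals):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature before taking the softmax.

    Higher temperature flattens the distribution (more varied, creative
    token choices); lower temperature sharpens it (more deterministic).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate next tokens
logits = [2.0, 1.0, 0.5, 0.1]

sharp = softmax_with_temperature(logits, 0.5)  # low temperature: peaky
flat = softmax_with_temperature(logits, 2.0)   # high temperature: flatter

print("low T:", sharp)
print("high T:", flat)
```

At low temperature the top token takes almost all of the probability mass, which is why lower values produce more predictable completions.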

Activate the new configuration

ollama create codegpt-codellama -f Modelfile

Check if the new configuration is listed

ollama list

Test the new configuration

ollama run codegpt-codellama

CodeGPT Extension


Install the CodeGPT extension in VSCode.

Then select Ollama as the provider from the dropdown menu and choose the configuration we created.

Generate Code


You can now prompt CodeGPT from within VSCode, and it will generate code using the local model.