llama
Llama generation module.
Classes:

- LlamaGen – Llama Generation class.
- OllamaGen – Ollama Generation class for local inference via ollama-python.
- OllamaOpenAIGen – OllamaGen via the OpenAI Python client.
LlamaGen
LlamaGen(
model_name: Optional[str] = None,
temperature: Optional[float] = None,
prompt_template: str = '',
output_max_length: int = 500,
device: str = 'auto',
structured_output: Optional[Type[BaseModel]] = None,
system_message: str = '',
api_params: dict[str, Any] = DEFAULT_API_PARAMS,
api_key: str = '',
cache: Optional[Cache] = None,
logs: dict[str, Any] = DEFAULT_LOGS,
)
Bases: GenerationBase
Llama Generation class.
Methods:

- generate – Generate text using the Llama model with language support.
Source code in src/rago/generation/base.py
generate
Generate text using the Llama model with language support.
Source code in src/rago/generation/llama.py
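A minimal usage sketch, not taken from this page: it assumes LlamaGen is importable from rago.generation (inferred from the source path above), that model_name accepts a Hugging Face model id, and that generate takes a query string plus a list of retrieved context passages, as is typical for rago generation backends.

```python
# Hedged sketch: import path, model id, and generate() signature are
# assumptions inferred from the signatures and source paths on this page.
from rago.generation import LlamaGen

gen = LlamaGen(
    model_name='meta-llama/Llama-3.2-1B',  # assumed: any Llama checkpoint you can access
    temperature=0.5,
    output_max_length=300,
    device='auto',  # let the backend choose CPU or GPU
)

answer = gen.generate(
    query='What is retrieval-augmented generation?',
    context=['RAG pairs a retriever that finds relevant passages '
             'with a generator that conditions on them.'],
)
print(answer)
```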
OllamaGen
OllamaGen(
model_name: Optional[str] = None,
temperature: Optional[float] = None,
prompt_template: str = '',
output_max_length: int = 500,
device: str = 'auto',
structured_output: Optional[Type[BaseModel]] = None,
system_message: str = '',
api_params: dict[str, Any] = DEFAULT_API_PARAMS,
api_key: str = '',
cache: Optional[Cache] = None,
logs: dict[str, Any] = DEFAULT_LOGS,
)
Bases: GenerationBase
Ollama Generation class for local inference via ollama-python.
Methods:

- generate – Generate text by sending a prompt to the local Ollama model.
Source code in src/rago/generation/base.py
generate
Generate text by sending a prompt to the local Ollama model.
Parameters:

Returns:

- str – The generated response text.
Source code in src/rago/generation/llama.py
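A hedged sketch of local inference with OllamaGen. It assumes the model tag has already been pulled with `ollama pull`, that the Ollama server is running locally, and that generate follows the same query-plus-context shape as above; the model tag 'llama3.2' is illustrative only.

```python
from rago.generation import OllamaGen  # import path assumed from the class listing above

gen = OllamaGen(
    model_name='llama3.2',  # assumed: a tag already available via `ollama pull llama3.2`
    temperature=0.3,
    system_message='Answer using only the provided context.',
)

response = gen.generate(
    query='Summarize the context in one sentence.',
    context=['Ollama serves local models over a lightweight HTTP API.'],
)
print(response)  # a plain str, per the Returns section above
```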
OllamaOpenAIGen
OllamaOpenAIGen(
model_name: Optional[str] = None,
temperature: Optional[float] = None,
prompt_template: str = '',
output_max_length: int = 500,
device: str = 'auto',
structured_output: Optional[Type[BaseModel]] = None,
system_message: str = '',
api_params: dict[str, Any] = DEFAULT_API_PARAMS,
api_key: str = '',
cache: Optional[Cache] = None,
logs: dict[str, Any] = DEFAULT_LOGS,
)
Bases: OpenAIGen
OllamaGen via the OpenAI Python client.
Methods:

- generate – Generate text using OpenAI's API with dynamic model support.
Source code in src/rago/generation/base.py
generate
Generate text using OpenAI's API with dynamic model support.
Source code in src/rago/generation/openai.py
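Since OllamaOpenAIGen subclasses OpenAIGen, it presumably routes OpenAI-client calls to a local Ollama endpoint (Ollama's OpenAI-compatible API conventionally listens at http://localhost:11434/v1). The sketch below assumes that server is running and that api_key is effectively a placeholder for local use; both points are assumptions, not confirmed by this page.

```python
from rago.generation import OllamaOpenAIGen  # import path assumed

gen = OllamaOpenAIGen(
    model_name='llama3.2',  # assumed Ollama model tag
    api_key='ollama',       # assumed placeholder: local endpoints typically ignore the key
    output_max_length=200,
)

print(
    gen.generate(
        query='Which class should I use for local inference?',
        context=['OllamaGen talks to ollama-python; '
                 'OllamaOpenAIGen reuses the OpenAI client.'],
    )
)
```

Choosing between the two Ollama backends comes down to the client stack: OllamaGen calls ollama-python directly, while OllamaOpenAIGen inherits OpenAIGen's request path, which can be convenient when the rest of a pipeline already targets the OpenAI client.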