llama
Llama generation module.
Classes:
- LlamaGen – Llama Generation class.
- OllamaGen – Ollama Generation class for local inference via ollama-python.
- OllamaOpenAIGen – Ollama generation via the OpenAI Python client.
LlamaGen
LlamaGen(
model_name: Optional[str] = None,
temperature: Optional[float] = None,
prompt_template: str = '',
output_max_length: int = 500,
device: str = 'auto',
structured_output: Optional[Type[BaseModel]] = None,
system_message: str = '',
api_params: dict[str, Any] = DEFAULT_API_PARAMS,
api_key: str = '',
cache: Optional[Cache] = None,
logs: dict[str, Any] = DEFAULT_LOGS,
)
Bases: GenerationBase
Llama Generation class.
Methods:
- generate – Generate text using the Llama model with language support.
Source code in src/rago/generation/base.py
generate
Generate text using the Llama model with language support.
Source code in src/rago/generation/llama.py
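A minimal usage sketch. The model name and arguments below are placeholders, and the `generate(query, context)` signature is assumed from GenerationBase; check the source linked above before relying on it:

```python
from rago.generation import LlamaGen

# Hypothetical example: the checkpoint name and parameters are placeholders.
gen = LlamaGen(
    model_name='meta-llama/Llama-3.2-1B',  # any Hugging Face Llama checkpoint
    temperature=0.5,
    output_max_length=500,
    device='auto',
)

# Assumes the base signature generate(query, context) -> str.
answer = gen.generate(
    query='What is retrieval-augmented generation?',
    context=['RAG combines document retrieval with text generation ...'],
)
print(answer)
```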
OllamaGen
OllamaGen(
model_name: Optional[str] = None,
temperature: Optional[float] = None,
prompt_template: str = '',
output_max_length: int = 500,
device: str = 'auto',
structured_output: Optional[Type[BaseModel]] = None,
system_message: str = '',
api_params: dict[str, Any] = DEFAULT_API_PARAMS,
api_key: str = '',
cache: Optional[Cache] = None,
logs: dict[str, Any] = DEFAULT_LOGS,
)
Bases: GenerationBase
Ollama Generation class for local inference via ollama-python.
Methods:
- generate – Generate text by sending a prompt to the local Ollama model.
Source code in src/rago/generation/base.py
generate
Generate text by sending a prompt to the local Ollama model.
Returns:

- str – The generated response text.
Source code in src/rago/generation/llama.py
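A minimal sketch of local inference. It assumes an Ollama server is running and the model has already been pulled; the model name and the `generate(query, context)` signature are assumptions:

```python
from rago.generation import OllamaGen

# Hypothetical example: requires a running Ollama server
# (`ollama serve`) and a locally pulled model (`ollama pull llama3.2`).
gen = OllamaGen(
    model_name='llama3.2',
    temperature=0.5,
)

# Assumes the base signature generate(query, context) -> str.
answer = gen.generate(
    query='Summarize the retrieved passages.',
    context=['Passage 1 ...', 'Passage 2 ...'],
)
print(answer)
```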
OllamaOpenAIGen
OllamaOpenAIGen(
model_name: Optional[str] = None,
temperature: Optional[float] = None,
prompt_template: str = '',
output_max_length: int = 500,
device: str = 'auto',
structured_output: Optional[Type[BaseModel]] = None,
system_message: str = '',
api_params: dict[str, Any] = DEFAULT_API_PARAMS,
api_key: str = '',
cache: Optional[Cache] = None,
logs: dict[str, Any] = DEFAULT_LOGS,
)
Bases: OpenAIGen
Ollama generation via the OpenAI Python client.
Methods:
- generate – Generate text using OpenAI's API with dynamic model support.
Source code in src/rago/generation/base.py
generate
Generate text using OpenAI's API with dynamic model support.
Source code in src/rago/generation/openai.py
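A minimal sketch, assuming this class routes requests to a local Ollama server through its OpenAI-compatible endpoint (the model name, API key handling, and `generate` signature are placeholders, not confirmed by this page):

```python
from rago.generation import OllamaOpenAIGen

# Hypothetical example: assumes the class targets Ollama's
# OpenAI-compatible endpoint (http://localhost:11434/v1), so the
# api_key is a dummy value and the model must be pulled locally.
gen = OllamaOpenAIGen(
    model_name='llama3.2',
    api_key='ollama',  # placeholder; Ollama does not validate it
)

# Assumes the base signature generate(query, context) -> str.
answer = gen.generate(
    query='Answer using the provided context.',
    context=['Retrieved passage ...'],
)
print(answer)
```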