generation
RAG Generation package.
Modules:
- base – Base classes for generation.
- gemini – GeminiGen class for text generation using Google's Gemini model.
- hugging_face – Hugging Face classes for text generation.
- llama – Llama generation module.
- openai – OpenAI Generation Model class for flexible GPT-based text generation.
Classes:
- GeminiGen – Gemini generation model for text generation.
- GenerationBase – Generic Generation class.
- HuggingFaceGen – Hugging Face generation model for text generation.
- LlamaGen – Llama Generation class.
- OpenAIGen – OpenAI generation model for text generation.
GeminiGen
GeminiGen(
model_name: Optional[str] = None,
temperature: Optional[float] = None,
prompt_template: str = '',
output_max_length: int = 500,
device: str = 'auto',
structured_output: Optional[Type[BaseModel]] = None,
system_message: str = '',
api_params: dict[str, Any] = DEFAULT_API_PARAMS,
api_key: str = '',
cache: Optional[Cache] = None,
logs: dict[str, Any] = DEFAULT_LOGS,
)
Bases: GenerationBase
Gemini generation model for text generation.
Methods:
- generate – Generate text using Gemini model support.
Source code in src/rago/generation/base.py
generate
Generate text using Gemini model support.
Source code in src/rago/generation/gemini.py
GenerationBase
GenerationBase(
model_name: Optional[str] = None,
temperature: Optional[float] = None,
prompt_template: str = '',
output_max_length: int = 500,
device: str = 'auto',
structured_output: Optional[Type[BaseModel]] = None,
system_message: str = '',
api_params: dict[str, Any] = DEFAULT_API_PARAMS,
api_key: str = '',
cache: Optional[Cache] = None,
logs: dict[str, Any] = DEFAULT_LOGS,
)
Bases: RagoBase
Generic Generation class.
Methods:
- generate – Generate text with optional language parameter.
Source code in src/rago/generation/base.py
generate (abstractmethod)
Generate text with optional language parameter.
Parameters:
- query (str) – The input query or prompt.
- context (list[str]) – Additional context information for the generation.

Returns:
- str – Generated text based on query and context.
Source code in src/rago/generation/base.py
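Every backend subclasses GenerationBase and implements generate(query, context). The self-contained sketch below mimics that pattern with a stand-in abstract base and a toy backend; the class names and the {query}/{context} template placeholders are illustrative assumptions, not the actual rago implementation.

```python
from abc import ABC, abstractmethod


class ToyGenerationBase(ABC):
    """Minimal stand-in for rago's GenerationBase (illustration only)."""

    def __init__(
        self, prompt_template: str = '', output_max_length: int = 500
    ) -> None:
        self.prompt_template = prompt_template
        self.output_max_length = output_max_length

    @abstractmethod
    def generate(self, query: str, context: list[str]) -> str:
        """Generate text based on the query and retrieved context."""
        ...


class EchoGen(ToyGenerationBase):
    """Toy backend: fills the template with the query and joined context."""

    def generate(self, query: str, context: list[str]) -> str:
        prompt = self.prompt_template.format(
            query=query, context='\n'.join(context)
        )
        # Truncate to the configured maximum output length.
        return prompt[: self.output_max_length]


gen = EchoGen(prompt_template='Q: {query}\nContext:\n{context}')
answer = gen.generate('What is RAG?', ['RAG = retrieval-augmented generation.'])
```

A real backend would replace the template echo with a model or API call, but the interface contract (query plus context in, generated string out) stays the same.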
HuggingFaceGen
HuggingFaceGen(
model_name: Optional[str] = None,
temperature: Optional[float] = None,
prompt_template: str = '',
output_max_length: int = 500,
device: str = 'auto',
structured_output: Optional[Type[BaseModel]] = None,
system_message: str = '',
api_params: dict[str, Any] = DEFAULT_API_PARAMS,
api_key: str = '',
cache: Optional[Cache] = None,
logs: dict[str, Any] = DEFAULT_LOGS,
)
Bases: GenerationBase
Hugging Face generation model for text generation.
Methods:
- generate – Generate the text from the query and augmented context.
Source code in src/rago/generation/base.py
generate
Generate the text from the query and augmented context.
Source code in src/rago/generation/hugging_face.py
LlamaGen
LlamaGen(
model_name: Optional[str] = None,
temperature: Optional[float] = None,
prompt_template: str = '',
output_max_length: int = 500,
device: str = 'auto',
structured_output: Optional[Type[BaseModel]] = None,
system_message: str = '',
api_params: dict[str, Any] = DEFAULT_API_PARAMS,
api_key: str = '',
cache: Optional[Cache] = None,
logs: dict[str, Any] = DEFAULT_LOGS,
)
Bases: GenerationBase
Llama Generation class.
Methods:
- generate – Generate text using Llama model with language support.
Source code in src/rago/generation/base.py
generate
Generate text using Llama model with language support.
Source code in src/rago/generation/llama.py
OpenAIGen
OpenAIGen(
model_name: Optional[str] = None,
temperature: Optional[float] = None,
prompt_template: str = '',
output_max_length: int = 500,
device: str = 'auto',
structured_output: Optional[Type[BaseModel]] = None,
system_message: str = '',
api_params: dict[str, Any] = DEFAULT_API_PARAMS,
api_key: str = '',
cache: Optional[Cache] = None,
logs: dict[str, Any] = DEFAULT_LOGS,
)
Bases: GenerationBase
OpenAI generation model for text generation.
Methods:
- generate – Generate text using OpenAI's API with dynamic model support.
Source code in src/rago/generation/base.py
generate
Generate text using OpenAI's API with dynamic model support.
Source code in src/rago/generation/openai.py
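All five classes share the same constructor parameters, notably prompt_template, system_message, and temperature. The sketch below shows how a chat-style backend such as OpenAIGen might combine these pieces into an API-ready message list before calling the model; the build_messages helper and the {query}/{context} placeholder names are hypothetical, introduced here only for illustration.

```python
# Hypothetical helper (not part of rago): combine a system_message and a
# prompt_template with the query and retrieved context into chat messages.
def build_messages(
    system_message: str,
    prompt_template: str,
    query: str,
    context: list[str],
) -> list[dict[str, str]]:
    """Build a chat-style message list from template, query, and context."""
    user_prompt = prompt_template.format(
        query=query, context='\n'.join(context)
    )
    messages = []
    if system_message:
        messages.append({'role': 'system', 'content': system_message})
    messages.append({'role': 'user', 'content': user_prompt})
    return messages


msgs = build_messages(
    system_message='You are a helpful assistant.',
    prompt_template='Answer using the context.\n{context}\nQuestion: {query}',
    query='What does RAG stand for?',
    context=['RAG stands for retrieval-augmented generation.'],
)
```

The resulting list is the shape most chat-completion APIs accept, which is why the same constructor parameters work across the OpenAI, Gemini, Llama, and Hugging Face backends.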