delitepy.nimblenet package¶
Submodules¶
delitepy.nimblenet.eventstore module¶
- class delitepy.nimblenet.eventstore.Dataframe(eventSchema: Dict[str, str])¶
Bases: object
- __init__(eventSchema: Dict[str, str]) Dataframe¶
An object of the Dataframe class can be used to interact with the on-device event store, in which events are populated by the frontend interactions of the user. A Dataframe object can be used to:
1. Fetch events using the filter_all or filter_by_function functions.
2. Create a pre-processor using the processor function.
Parameters¶
eventSchema : Dict[str, str]
Examples¶
>>> eventSchema = {"column1": "int32", "column2": "float", "column3": "string"}
>>> df = Dataframe(eventSchema)
- append(event: dict) None¶
Add an event to the dataframe.
Parameters¶
- event : dict
Event to be added to the dataframe.
- filter_all() FilteredDataframe¶
Collects all the events present on the device for a specific eventStoreName (defined in the event_store function) into a FilteredDataframe.
Returns¶
- filteredDataframe : FilteredDataframe
FilteredDataframe with all the stored events.
- filter_by_function(filterFunction: LambdaType) FilteredDataframe¶
Filter the events stored on the device by executing the function on all the events. Events for which the function returns true are stored in the FilteredDataframe.
Parameters¶
- filterFunction : function
Name of the user-defined function by which the event store needs to be filtered. The function should take a single event of the event store as an argument and return true or false based on whether the event should be filtered in or out.
Returns¶
- filteredDataframe : FilteredDataframe
FilteredDataframe with only the events which were selected by the filterFunction.
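Examples¶
A minimal sketch, assuming a Dataframe df created with the eventSchema shown above; the filter function name and the threshold are illustrative:
>>> def keepLargeValues(event):
...     return event["column1"] > 10
>>> filteredDf = df.filter_by_function(keepLargeValues)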
- processor(dataType: str) IntermediateEventProcessor¶
Create an IntermediateEventProcessor object, which can then be used to add processing and aggregation logic over the dataframe.
Parameters¶
- dataType : str
Data type in which the outputs of the processed events are fetched.
- class delitepy.nimblenet.eventstore.EventProcessor¶
Bases: object
An object of the EventProcessor class can be used to interact with the on-device event store in which the events are populated by the frontend interactions of the user. Using the EventProcessor object requires 2 steps:
1. Creating an object of EventProcessor using the create_processor function and then defining the type of processing/aggregation you want to use. To define a processor, the rollingWindow, groupBy, add_computation and create functions need to be chained.
2. Fetching the aggregated features from the processor using the get_for_items and get functions.
- get(group: list | Tensor) Tensor¶
Returns the aggregated features of the columns which were defined via the groupBy function. Suppose the EventProcessor was created with a groupBy of two columns, say categoryId and brand; then to fetch the aggregated value of the group, this function will be called with group=["Mobile", "Samsung"].
Parameters¶
- group : list | Tensor
List/Tensor of the group that you want to get the aggregated value for.
Returns¶
- aggregatedFeatures : Tensor
Aggregated features output tensor.
- get_for_items(frontendJsonTensor: Tensor) Tensor¶
Returns the aggregated features tensor. The data type of the tensor will be the same as defined in the create_processor function. The shape of the tensor will be [frontendJsonTensor.length(), rollingWindow.length()].
Parameters¶
- frontendJsonTensor : Tensor
A tensor of JSON objects. The aggregated features fetched will correspond to each of the JSON objects present in the tensor.
Returns¶
- aggregatedFeatures : Tensor
Aggregated features output tensor.
- class delitepy.nimblenet.eventstore.FilteredDataframe¶
Bases: object
- fetch(columnName: str, dtype: str) Tensor¶
Fetch the column values present in filtered events.
Parameters¶
- columnName : str
Column for which the values need to be fetched.
- dtype : str
The data type in which the values are to be fetched.
Returns¶
- featureTensor : Tensor
Output tensor of shape n, where n is the number of filtered events.
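Examples¶
A minimal sketch, assuming a Dataframe df created with the eventSchema shown above:
>>> filteredDf = df.filter_all()
>>> column1Values = filteredDf.fetch("column1", "int32")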
- class delitepy.nimblenet.eventstore.IntermediateEventProcessor¶
Bases: object
- add_computation(aggColumn: str, aggOp: str, defaultValue: int | float)¶
Define the aggregation logic to happen on the grouped events. Multiple aggregation computations can be added to the same EventProcessor.
Parameters¶
- aggColumn : str
Defines the column to aggregate on.
- aggOp : str
Defines the aggregate operation to perform. Currently supported options are Avg, Sum, Count, Min and Max.
- defaultValue : int|float
Defines the default value to return in case there are no events to aggregate on.
- create() EventProcessor¶
Create an object of the EventProcessor class. The rollingWindow, groupBy columns and computations cannot be added after this function is called.
Returns¶
- eventProcessor : EventProcessor
An object of the EventProcessor class.
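Examples¶
A minimal sketch of building a processor and fetching an aggregated value, assuming a Dataframe df whose schema has categoryId, brand and price columns; the column names, the rollingWindow argument and the data type are illustrative assumptions:
>>> processor = (df.processor("float")
...     .rollingWindow([600])
...     .groupBy(["categoryId", "brand"])
...     .add_computation("price", "Avg", 0.0)
...     .create())
>>> avgPrice = processor.get(["Mobile", "Samsung"])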
- groupBy(groupByColumns: list[str] | Tensor | tuple[str])¶
Add the list of columns the processor should aggregate on.
Parameters¶
- groupByColumns : list[str] | Tensor | tuple[str]
The names of the columns you want the processor to aggregate on. The argument is a list/tensor so that Group By on multiple columns can be supported.
- class delitepy.nimblenet.eventstore.RawEventStore(eventType: str, eventExpiryType: str, eventExpiry: int)¶
Bases: object
- __init__(eventType: str, eventExpiryType: str, eventExpiry: int) RawEventStore¶
An object of the RawEventStore class is used to define how the events of a particular type should be handled by the SDK. It affects the behaviour of the events in two ways:
1. The constructor defines how and when the events should expire.
2. It can be used in the add_event hook to modify the event coming from the frontend, like filtering events out on some condition or adding new fields to the event.
Parameters¶
- eventType : str
Event type used by the frontend when adding a new event.
- eventExpiryType : str
How the old events expire, i.e. by time or count. It accepts two values: "time" and "count".
- eventExpiry : int
If eventExpiryType is "time", it defines the time in minutes after which events should be deleted. If eventExpiryType is "count", it denotes the number of latest events which should be kept; the rest are deleted.
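Examples¶
A minimal sketch; the event type names and expiry values are illustrative:
>>> clickStore = RawEventStore("UserClickEvent", "count", 1000)
>>> viewStore = RawEventStore("UserViewEvent", "time", 60)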
delitepy.nimblenet.llm module¶
Type stubs for delitepy.nimblenet.llm module.
- class delitepy.nimblenet.llm.CharStream¶
Bases: object
Character stream for streaming LLM responses.
- finished() bool¶
Check if the stream has finished generating content.
Returns¶
- bool
True if the stream has finished, False otherwise.
- next() str¶
Get the next string chunk from the stream.
Returns¶
- str
The next string chunk from the LLM response.
- skip_text_and_get_json_stream() JSONValueStream¶
Skip text content and get a JSON stream for parsing JSON data.
This method skips over any text content until it finds a valid JSON character, then returns a JSONValueStream for parsing the JSON object.
Returns¶
- JSONValueStream
A JSONValueStream for parsing the JSON content.
Examples¶
>>> char_stream = llm.prompt('Generate JSON: {"key": "value"}')
>>> json_stream = char_stream.skip_text_and_get_json_stream()
>>> value = json_stream.get_blocking_str("key")
- class delitepy.nimblenet.llm.JSONArrayIterator¶
Bases: object
Iterator for JSON array streams.
- next() JSONValueStream | None¶
Get the next element from the array iterator.
Returns¶
- Optional[JSONValueStream]
The next JSONValueStream element, or None if no more elements.
- next_available() bool¶
Check if the next element is available.
Returns¶
- bool
True if the next element is available, False otherwise.
- next_blocking() JSONValueStream | None¶
Get the next element from the array iterator, blocking until available.
Returns¶
- Optional[JSONValueStream]
The next JSONValueStream element, or None if no more elements.
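Examples¶
A minimal sketch of draining an array iterator, assuming json_stream is a JSONValueStream over a JSON array:
>>> it = json_stream.iterator()
>>> element = it.next_blocking()
>>> while element is not None:
...     element.wait_for_completion()
...     element = it.next_blocking()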
- class delitepy.nimblenet.llm.JSONValueStream¶
Bases: object
JSON value stream for parsing and accessing JSON data from character streams.
- finished() bool¶
Check if the JSON stream has finished parsing.
Returns¶
- bool
True if the stream has finished, False otherwise.
- get_blocking(key: str) JSONValueStream¶
Get a JSON value stream for a specific key, blocking until available.
Parameters¶
- keystr
The key to look up in the JSON object.
Returns¶
- JSONValueStream
A JSONValueStream for the specified key.
Raises¶
- UnsupportedError
If the stream is not an object type.
- KeyError
If the key is not found in the JSON object.
- get_blocking_str(key: str) str¶
Get a string value for a specific key, blocking until available.
Parameters¶
- keystr
The key to look up in the JSON object.
Returns¶
- str
The string value for the specified key.
Raises¶
- UnsupportedError
If the stream is not an object type.
- KeyError
If the key is not found in the JSON object.
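Examples¶
A minimal sketch of reading a nested value, assuming json_stream is a JSONValueStream over an object such as {"user": {"name": "Ada"}}; the keys are illustrative:
>>> user_stream = json_stream.get_blocking("user")
>>> name = user_stream.get_blocking_str("name")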
- iterator() JSONArrayIterator¶
Get an iterator for array elements.
Returns¶
- JSONArrayIterator
A JSONArrayIterator for iterating over array elements.
Raises¶
- UnsupportedError
If the stream is not an array type.
- wait_for_completion() None¶
Wait for the JSON stream to complete parsing.
- class delitepy.nimblenet.llm.LLM(config: Dict[str, Any])¶
Bases: object
Large Language Model interface for text generation and conversation management.
- __init__(config: Dict[str, Any]) None¶
Initialize an LLM instance with the given configuration.
Parameters¶
- configDict[str, Any]
Configuration dictionary containing model parameters. Must include the 'name' field specifying the model name. May include 'provider' and 'metadata' fields.
Examples¶
>>> llm = LLM({"name": "llama-3"})
>>> llm = LLM({"name": "gemini:nano:on-device", "provider": "os"})
>>> llmMetadata = {
...     "endOfTurnToken": "<|eot_id|>",
...     "maxTokensToGenerate": 2000,
...     "tokenizerFileName": "tokenizer.bin",
...     "temperature": 0.8
... }
>>> llm = LLM({"name": "llama-3", "metadata": llmMetadata})
- add_context(context: str) None¶
Add context to the LLM's conversation history, e.g. loading a past conversation.
Parameters¶
- contextstr
The context string to add to the conversation history. This can include system prompts, user messages, or assistant responses.
Examples¶
>>> llm.add_context("<|start_header_id|>system<|end_header_id|>You are a helpful assistant.<|eot_id|>")
- cancel() None¶
Cancel ongoing text generation.
Stops the current LLM generation process if one is running. This method can be called to interrupt a long-running generation.
- clear_context() None¶
Clear the LLM’s conversation history and reset the model context.
This method removes all previously added context and conversation history, effectively resetting the LLM to its initial state. This is useful when you want to start a fresh conversation or clear sensitive information from the model’s memory.
Examples¶
>>> llm.add_context("Previous conversation context...")
>>> llm.prompt("Hello, how are you?")
>>> llm.clear_context()  # Reset to clean state
>>> llm.prompt("What's the weather?")  # Fresh conversation
- prompt(prompt: str) CharStream¶
Generate a response to the given prompt.
Parameters¶
- promptstr
The input prompt string to send to the LLM.
Returns¶
- CharStream
A CharStream object for streaming the LLM response.
Examples¶
>>> stream = llm.prompt("What is the capital of France?")
>>> while not stream.finished():
...     chunk = stream.next()
...     print(chunk)
delitepy.nimblenet.model module¶
- class delitepy.nimblenet.model.Model(modelName: str)¶
Bases: object
The Model class is used to interact with AI and ML models from your DelitePy workflow scripts. It can be used to perform different actions like loading the model from disk, checking its status and executing it.
- __init__(modelName: str) Model¶
Create a new model object and instruct the DeliteAI SDK to load the model and keep it ready for usage.
Parameters¶
- modelName : str
Name of the ML model for the DeliteAI SDK to load.
- run(*args: Tensor) tuple[Tensor, ...]¶
Run the model to get inference output, given the inputs.
Parameters¶
- args : *Tensor
Input tensors to the model, in the order they are expected by the model.
Returns¶
- modelOutput : tuple[Tensor, …]
Returns the output tensors of the model as a tuple. The order of tensors is the same as defined during model construction.
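Examples¶
A minimal sketch; the model name and input values are illustrative, and the input is created with the tensor function from the delitepy.nimblenet.tensor module:
>>> model = Model("my_model")
>>> modelInput = tensor([1.0, 2.0, 3.0], "float")
>>> modelOutputs = model.run(modelInput)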
delitepy.nimblenet.tensor module¶
- class delitepy.nimblenet.tensor.Tensor¶
Bases: object
- argsort(order: str)¶
Returns the list of indices of the sorted tensor, based on the order specified. For argsort to work, the tensor should be 1-dimensional. Does not modify the existing tensor.
Parameters¶
- order : str
Sorting order of the tensor. Allowed values are "asc" and "desc".
Returns¶
- indices : Tensor
List of indices of the tensor, based on sorting order.
- arrange(indices: list[int] | Tensor)¶
Arranges the tensor based on the indices provided as the parameter. The number of elements in the indices list should be less than or equal to the number of elements in the tensor. Does not modify the existing tensor. For arrange to work, the tensor should be 1-dimensional.
Parameters¶
- indices : list[int] | Tensor
Indices in which the tensor should be arranged. Allowed types are either a list of ints or a tensor.
Returns¶
- arrangedTensor : Tensor
Returns the tensor with values arranged based on indices.
- reshape(newShape: list[int] | Tensor)¶
Reshape the tensor to the provided newShape. It is required that the newShape be compatible with the shape of the original tensor, i.e. the total number of elements in both shapes should be the same.
Parameters¶
- newShape : list[int] | Tensor
The new shape in which the existing tensor needs to be reshaped.
Returns¶
- reshapedTensor : Tensor
Returns the same tensor with the new shape.
- sort(order: str)¶
Sort the tensor based on order. For sort to work, the tensor should be 1-dimensional.
Parameters¶
- order : str
Order of sorting. Allowed values are "asc" and "desc".
Returns¶
- sortedTensor : Tensor
The same tensor in the sorted order.
- topk(num: int, order: str)¶
Return the indices of the topk elements of the tensor after sorting based on order. For topk to work, the tensor should be 1-dimensional. Does not modify the existing tensor.
Parameters¶
- num : int
Number of indices to be returned.
- order : str
Order of sorting. Allowed values are "asc" and "desc".
Returns¶
- topkIndices : Tensor
Indices of the topk elements of the tensor, based on sorting order.
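Examples¶
A minimal sketch of the 1-dimensional tensor operations, using the tensor function from this module; the values are illustrative:
>>> t = tensor([3, 1, 2], "int32")
>>> sortedT = t.sort("asc")
>>> indices = t.argsort("desc")
>>> top2Indices = t.topk(2, "desc")
>>> rearranged = t.arrange([2, 0, 1])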
- delitepy.nimblenet.tensor.tensor(list: list[any], dtype: str) Tensor¶
Creates and returns a tensor from the provided list, with the desired data type. All elements of the list should have the same data type, as supported by the dtype argument.
Parameters¶
- list : list[any]
List to be used to create the tensor.
- dtype : str
Data type with which to create the tensor.
Returns¶
- tensor : Tensor
Returns the tensor of the corresponding shape and data type, filled with values from the list.
- delitepy.nimblenet.tensor.zeros(shape: list[int], dtype: str) Tensor¶
Creates and returns a tensor of zeros with the given shape and data type.
Parameters¶
- shape : list[int]
Desired shape of the tensor.
- dtype : str
Data type with which to create the tensor.
Returns¶
- tensor : Tensor
Returns the tensor of the shape and data type filled with zeros.
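Examples¶
A minimal sketch of tensor creation and reshaping; the shapes and values are illustrative:
>>> z = zeros([2, 3], "float")
>>> t = tensor([1, 2, 3, 4], "int32")
>>> reshaped = t.reshape([2, 2])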
delitepy.nimblenet.utils module¶
- delitepy.nimblenet.utils.exp(x: int | float) float¶
Returns e raised to the power x, where e=2.718281… is the base of natural logarithms.
Parameters¶
x : int|float
Returns¶
- result : float
Result of e**x.
- delitepy.nimblenet.utils.is_float(s: object) bool¶
Returns whether the object is a float.
Returns¶
- result : bool
True if the object passed is a float, False otherwise.
- delitepy.nimblenet.utils.is_integer(s: object) bool¶
Returns whether the object is an integer.
Returns¶
- result : bool
True if the object passed is an integer, False otherwise.
- delitepy.nimblenet.utils.is_string(s: object) bool¶
Returns whether the object is a string.
Returns¶
- result : bool
True if the object passed is a string, False otherwise.
- delitepy.nimblenet.utils.max(tensor: Tensor) int | float¶
Returns the maximum element in the tensor.
Returns¶
- result : int|float
Maximum element in the tensor.
- delitepy.nimblenet.utils.mean(tensor: Tensor) int | float¶
Returns the mean of all elements of the tensor.
Returns¶
- result : int|float
Mean of all elements of the tensor.
- delitepy.nimblenet.utils.min(tensor: Tensor) int | float¶
Returns the minimum element in the tensor.
Returns¶
- result : int|float
Minimum element in the tensor.
- delitepy.nimblenet.utils.parse_json(s: str) dict¶
Returns the string parsed as JSON.
Returns¶
- result : dict
JSON object parsed from the string.
- delitepy.nimblenet.utils.pow(base: int | float, exp: int | float) float¶
Returns base raised to the power exp. The arguments must have numeric type.
Parameters¶
base : int|float
exp : int|float
Returns¶
- result : float
Result of base**exp.
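Examples¶
A minimal sketch of the numeric and JSON helpers; the values are illustrative:
>>> result = exp(1)
>>> cube = pow(2, 3)
>>> obj = parse_json('{"key": "value"}')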
Module contents¶
Package delitepy.nimblenet.
- delitepy.nimblenet.get_config() dict¶
Get the config defined during NimbleEdge SDK initialization. Returns a dict with two keys: cohortIds and compatibilityTag.
Returns¶
config : dict
Example¶
>>> config = nm.get_config()
>>> tag = config["compatibilityTag"]  # Will return the compatibilityTag
- delitepy.nimblenet.llm(config: Dict[str, Any]) LLM¶
Create an LLM instance with the given configuration.
This is a factory function that creates and returns an LLM instance.
Parameters¶
- config : Dict[str, Any]
Configuration dictionary containing model parameters. Must include the 'name' field specifying the model name. May include 'provider' and 'metadata' fields.
Returns¶
- LLM
An LLM instance configured with the specified parameters.
Examples¶
>>> llm_instance = llm({"name": "llama-3"})