Tools
Smolagents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change.
To learn more about agents and tools, make sure to read the introductory guide. This page contains the API docs for the underlying classes.
Tools
load_tool
smolagents.load_tool
< source >( task_or_repo_id, model_repo_id: typing.Optional[str] = None, token: typing.Optional[str] = None, trust_remote_code: bool = False, **kwargs )
Parameters
- task_or_repo_id (`str`) — The task for which to load the tool, or a repo ID of a tool on the Hub. Tasks implemented in Transformers are:
  - `"document_question_answering"`
  - `"image_question_answering"`
  - `"speech_to_text"`
  - `"text_to_speech"`
  - `"translation"`
- model_repo_id (`str`, optional) — Use this argument to use a different model than the default one for the tool you selected.
- token (`str`, optional) — The token to identify you on hf.co. If unset, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
- trust_remote_code (`bool`, optional, defaults to `False`) — This needs to be accepted in order to load a tool from the Hub.
- kwargs (additional keyword arguments, optional) — Additional keyword arguments that will be split in two: all arguments relevant to the Hub (such as `cache_dir`, `revision`, `subfolder`) will be used when downloading the files for your tool, and the others will be passed along to its init.
Main function to quickly load a tool, be it on the Hub or in the Transformers library.
Loading a tool means that you’ll download the tool and execute it locally. ALWAYS inspect the tool you’re downloading before loading it within your runtime, as you would do when installing a package using pip/npm/apt.
tool
smolagents.tool
< source >( tool_function: typing.Callable )
Converts a function into an instance of a Tool subclass.
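Conceptually, such a decorator inspects the function’s signature and docstring to collect the attributes a `Tool` needs (`name`, `description`, `inputs`, `output_type`). A minimal, self-contained sketch of that idea — not the library’s actual implementation, and `describe_tool` is a hypothetical helper:

```python
import inspect
from typing import Callable

def describe_tool(tool_function: Callable) -> dict:
    """Illustrative sketch: derive a tool spec from a plain function,
    roughly the information a decorator like `tool` needs to collect."""
    signature = inspect.signature(tool_function)
    return {
        "name": tool_function.__name__,
        "description": inspect.getdoc(tool_function) or "",
        # One entry per parameter, keyed by name, with its annotated type.
        "inputs": {
            param.name: {"type": param.annotation, "description": ""}
            for param in signature.parameters.values()
        },
        "output_type": signature.return_annotation,
    }

def shout(text: str) -> str:
    """Returns the input text in upper case."""
    return text.upper()

spec = describe_tool(shout)  # name, description, inputs, output_type
```

This illustrates why the type hints and the docstring of the decorated function matter: they are the only sources the conversion can draw on.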
Tool
A base class for the functions used by the agent. Subclass this, implement the `forward` method, and define the following class attributes:
- description (`str`) — A short description of what your tool does, the inputs it expects and the output(s) it will return. For instance: “This is a tool that downloads a file from a `url`. It takes the `url` as input, and returns the text contained in the file”.
- name (`str`) — A performative name that will be used for your tool in the prompt to the agent. For instance `"text-classifier"` or `"image_generator"`.
- inputs (`Dict[str, Dict[str, Union[str, type]]]`) — The dict of modalities expected for the inputs. It has one `type` key and a `description` key. This is used by `launch_gradio_demo` or to make a nice Space from your tool, and can also be used in the generated description for your tool.
- output_type (`type`) — The type of the tool output. This is used by `launch_gradio_demo` or to make a nice Space from your tool, and can also be used in the generated description for your tool.
You can also override the `setup()` method if your tool has an expensive operation to perform before being usable (such as loading a model). `setup()` will be called the first time you use your tool, but not at instantiation.
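The lazy-setup lifecycle can be sketched with a small stand-in class (hypothetical, not the real smolagents `Tool`): the expensive work runs on the first call, never at instantiation.

```python
class LazyTool:
    """Stand-in illustrating the setup() lifecycle: expensive
    initialization is deferred until the tool is first used."""

    def __init__(self):
        self.is_initialized = False

    def setup(self):
        # Expensive one-time work (e.g. loading a model) would go here.
        self.model = "loaded"

    def forward(self, text: str) -> str:
        return f"{self.model}: {text}"

    def __call__(self, *args, **kwargs):
        # setup() runs exactly once, on first use.
        if not self.is_initialized:
            self.setup()
            self.is_initialized = True
        return self.forward(*args, **kwargs)

tool = LazyTool()       # no expensive work happens yet
result = tool("hello")  # setup() runs here, then forward()
```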
from_gradio
Creates a Tool from a gradio tool.
from_hub
< source >( repo_id: str, token: typing.Optional[str] = None, trust_remote_code: bool = False, **kwargs )
Parameters
- repo_id (`str`) — The name of the repo on the Hub where your tool is defined.
- token (`str`, optional) — The token to identify you on hf.co. If unset, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
- trust_remote_code (`bool`, optional, defaults to `False`) — This flag marks that you understand the risk of running remote code and that you trust this tool. If it is not set to `True`, loading the tool from the Hub will fail.
- kwargs (additional keyword arguments, optional) — Additional keyword arguments that will be split in two: all arguments relevant to the Hub (such as `cache_dir`, `revision`, `subfolder`) will be used when downloading the files for your tool, and the others will be passed along to its init.
Loads a tool defined on the Hub.
Loading a tool from the Hub means that you’ll download the tool and execute it locally. ALWAYS inspect the tool you’re downloading before loading it within your runtime, as you would do when installing a package using pip/npm/apt.
from_langchain
Creates a Tool from a langchain tool.
from_space
< source >( space_id: str, name: str, description: str, api_name: typing.Optional[str] = None, token: typing.Optional[str] = None ) → Tool
Parameters
- space_id (`str`) — The id of the Space on the Hub.
- name (`str`) — The name of the tool.
- description (`str`) — The description of the tool.
- api_name (`str`, optional) — The specific api_name to use, if the Space has several tabs. If not specified, will default to the first available api.
- token (`str`, optional) — Add your token to access private Spaces or increase your GPU quotas.
Returns
The Space, as a tool.
Creates a Tool from a Space given its id on the Hub.
push_to_hub
< source >( repo_id: str, commit_message: str = 'Upload tool', private: typing.Optional[bool] = None, token: typing.Union[bool, str, NoneType] = None, create_pr: bool = False )
Parameters
- repo_id (`str`) — The name of the repository you want to push your tool to. It should contain your organization name when pushing to a given organization.
- commit_message (`str`, optional, defaults to `"Upload tool"`) — Message to commit while pushing.
- private (`bool`, optional) — Whether to make the repo private. If `None` (default), the repo will be public unless the organization’s default is private. This value is ignored if the repo already exists.
- token (`bool` or `str`, optional) — The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
- create_pr (`bool`, optional, defaults to `False`) — Whether or not to create a PR with the uploaded files or directly commit.
Upload the tool to the Hub.
For this method to work properly, your tool must have been defined in a separate module (not `__main__`).
save
< source >( output_dir )
Saves the relevant code files for your tool so it can be pushed to the Hub. This will copy the code of your tool in `output_dir` as well as autogenerate:
- a `tool.py` file containing the logic for your tool.
- an `app.py` file providing a UI for your tool when it is exported to a Space with `tool.push_to_hub()`.
- a `requirements.txt` containing the names of the modules used by your tool (as detected when inspecting its code).
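The module-name detection mentioned for `requirements.txt` can be approximated by walking the tool code’s AST for import statements. A rough, self-contained sketch of the idea — the library may implement this differently, and `detect_modules` is a hypothetical helper:

```python
import ast

def detect_modules(source_code: str) -> set:
    """Collect top-level module names imported by a piece of code."""
    modules = set()
    for node in ast.walk(ast.parse(source_code)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # `import os.path` depends on the top-level package `os`.
                modules.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    return modules

detect_modules("import requests\nfrom PIL import Image\n")
# {'requests', 'PIL'}
```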
setup
Overwrite this method for any operation that is expensive and needs to be executed before you start using your tool, such as loading a big model.
Toolbox
class smolagents.Toolbox
< source >( tools: typing.List[smolagents.tools.Tool] add_base_tools: bool = False )
The toolbox contains all tools that the agent can perform operations with, as well as a few methods to manage them.
add_tool
Adds a tool to the toolbox
Clears the toolbox
remove_tool
< source >( tool_name: str )
Removes a tool from the toolbox
show_tool_descriptions
< source >( tool_description_template: typing.Optional[str] = None )
Returns the description of all tools in the toolbox
update_tool
Updates a tool in the toolbox according to its name.
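The management methods above amount to keeping tools in a name-indexed mapping. A minimal sketch of that bookkeeping (a hypothetical `MiniToolbox`, not the real `Toolbox` class):

```python
from collections import namedtuple

# Hypothetical stand-in for a Tool: just a name and a description.
FakeTool = namedtuple("FakeTool", ["name", "description"])

class MiniToolbox:
    """Name-indexed tool registry sketching add/remove/update semantics."""

    def __init__(self, tools):
        self._tools = {tool.name: tool for tool in tools}

    def add_tool(self, tool):
        if tool.name in self._tools:
            raise KeyError(f"Tool {tool.name} already exists")
        self._tools[tool.name] = tool

    def remove_tool(self, tool_name):
        del self._tools[tool_name]

    def update_tool(self, tool):
        # Replaces the tool registered under the same name.
        self._tools[tool.name] = tool

    def show_tool_descriptions(self):
        return "\n".join(
            f"- {name}: {tool.description}"
            for name, tool in self._tools.items()
        )

box = MiniToolbox([FakeTool("echo", "repeats its input")])
```

Keying on the tool’s `name` attribute is what lets `remove_tool` and `update_tool` operate on a bare name rather than an object reference.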
launch_gradio_demo
smolagents.launch_gradio_demo
< source >( tool: Tool )
Launches a gradio demo for a tool. The corresponding tool class needs to properly implement the class attributes `inputs` and `output_type`.
ToolCollection
class smolagents.ToolCollection
< source >( collection_slug: str token: typing.Optional[str] = None )
Tool collections enable loading all Spaces from a collection in order to be added to the agent’s toolbox.
[!NOTE] Only Spaces will be fetched, so feel free to add models and datasets to your collection if you’d like this collection to showcase them.
Example:
>>> from smolagents import ToolCollection, CodeAgent
>>> image_tool_collection = ToolCollection(collection_slug="huggingface-tools/diffusion-tools-6630bb19a942c2306a2cdb6f")
>>> agent = CodeAgent(tools=[*image_tool_collection.tools], add_base_tools=True)
>>> agent.run("Please draw me a picture of rivers and lakes.")
Agent Types
Agents can handle any type of object passed between tools; tools, being completely multimodal, can accept and return text, image, audio, and video, among other types. To increase compatibility between tools, as well as to correctly render these returns in ipython (jupyter, colab, ipython notebooks, …), we implement wrapper classes around these types.
The wrapped objects should continue behaving as initially; a text object should still behave as a string, an image object should still behave as a `PIL.Image`.
These types have three specific purposes:
- Calling `to_raw` on the type should return the underlying object.
- Calling `to_string` on the type should return the object as a string: that can be the string in case of an `AgentText`, but will be the path of the serialized version of the object in other instances.
- Displaying it in an ipython kernel should display the object correctly.
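The three purposes above can be sketched with a `str` subclass along the lines of `AgentText` (a simplified stand-in called `TextWrapper`, not the library’s class):

```python
class TextWrapper(str):
    """Behaves exactly like a string while exposing to_raw/to_string."""

    def to_raw(self):
        # The underlying object: for text, the plain string itself.
        return str(self)

    def to_string(self):
        # For text, the string form IS the object; image/audio wrappers
        # would instead return a path to a serialized file here.
        return str(self)

text = TextWrapper("rivers and lakes")
text.upper()    # still behaves as a str
text.to_raw()   # 'rivers and lakes'
```

Subclassing `str` directly is what preserves the “behaves as initially” guarantee: every string operation keeps working unchanged.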
AgentText
Text type returned by the agent. Behaves as a string.
AgentImage
Image type returned by the agent. Behaves as a `PIL.Image`.
save
< source >( output_bytes, format: str = None, **params )
Saves the image to a file.
to_raw
Returns the “raw” version of that object. In the case of an AgentImage, it is a PIL.Image.
to_string
Returns the stringified version of that object. In the case of an AgentImage, it is a path to the serialized version of the image.
AgentAudio
Audio type returned by the agent.
to_raw
Returns the “raw” version of that object. It is a `torch.Tensor` object.
to_string
Returns the stringified version of that object. In the case of an AgentAudio, it is a path to the serialized version of the audio.