Now we can instantiate our model object and generate chat completions.
If no model is specified, the default model (`meta-llama-3.3-70b-instruct`) is used.
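The chat example below assumes an `llm` object has already been constructed. The exact class depends on which LlamaIndex integration you have installed; as a minimal sketch, assuming your provider exposes an OpenAI-compatible endpoint, the generic `OpenAILike` connector can stand in. The `api_base` URL, API key lookup, and environment variable name here are placeholders, not real values:

```python
import os

# A minimal sketch, assuming an OpenAI-compatible endpoint; swap in the
# connector class for your actual provider.
from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    model="meta-llama-3.3-70b-instruct",  # the default model named above
    api_base="https://your-endpoint.example.com/v1",  # placeholder URL
    api_key=os.environ.get("API_KEY", "not-needed"),  # placeholder key
    is_chat_model=True,  # route .chat() calls to the chat endpoint
)
```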
```python
from llama_index.core.llms import ChatMessage, MessageRole

# Build a single user message and send it to the model.
message = ChatMessage(role=MessageRole.USER, content="Tell me a joke.")
resp = llm.chat([message])
print(resp)
```