A2A demo by Emmanuel Bernard
```java
import com.google.adk.agents.BaseAgent;
import com.google.adk.agents.LlmAgent;
import com.google.adk.tools.BaseTool;
import com.google.adk.tools.mcp.McpToolset;
import com.google.adk.tools.mcp.SseServerParameters;
import com.google.adk.tools.mcp.StreamableHttpServerParameters;
import com.google.adk.web.AdkWebServer;
import java.util.List;
```
## Google: Market Research Report (Q2 2025)

Executive Summary:

This report provides an overview of Google, its recent performance, and current market position. Google, a subsidiary of Alphabet Inc., continues to be a dominant force in the technology industry, driven by its core search engine, cloud services, and advancements in artificial intelligence. Recent financial results for Q2 2025 exceeded expectations, demonstrating strong growth across key
```java
var companyProfiler = LlmAgent.builder()
    .name("company-profiler")
    .description("Provides a general overview of a company.")
    .instruction("""
        Your role is to provide a brief overview of the
        given company.
        Include its mission, headquarters, and current CEO.
        Use the Google Search Tool to find this information.
        """)
    // A search tool still needs to be attached via .tools(...)
    // for the instruction above to be actionable.
    .build();
```
```java
package agents;

import static java.nio.charset.StandardCharsets.UTF_8;

import java.util.Scanner;

import com.google.adk.agents.BaseAgent;
import com.google.adk.agents.LlmAgent;
import com.google.adk.events.Event;
import com.google.adk.runner.InMemoryRunner;
import com.google.adk.sessions.Session;

import dev.langchain4j.agent.tool.ToolExecutionRequest;
import dev.langchain4j.mcp.McpToolProvider;
import dev.langchain4j.mcp.client.DefaultMcpClient;
import dev.langchain4j.mcp.client.McpClient;
import dev.langchain4j.mcp.client.transport.McpTransport;
import dev.langchain4j.mcp.client.transport.http.HttpMcpTransport;
import dev.langchain4j.model.vertexai.VertexAiGeminiChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.tool.ToolProvider;
```
```java
public class SentenceWindowRetrieval {
    public static void main(String[] args) throws IOException {
        Document capitalDocument = Document.from("...");

        VertexAiEmbeddingModel embeddingModel = VertexAiEmbeddingModel.builder()
            .project(System.getenv("GCP_PROJECT_ID"))
            .endpoint(System.getenv("GCP_VERTEXAI_ENDPOINT"))
            .location(System.getenv("GCP_LOCATION"))
            .publisher("google")
```
Okay, here's a breakdown of how to create a new LangChain4j embedding store module for Google Cloud Firestore, along with the key steps and considerations, mirroring the structure of existing modules like langchain4j-milvus.

## Project Structure

Your project structure should follow the established pattern. Here is a simplified version based on the most relevant parts of the provided file listing. The full structure would be much larger (like the main langchain4j project), but this captures the essentials:

```
langchain4j/
└── langchain4j-embedding-store-google-firestore/   (or similar name)
    ├── pom.xml   (your module's Maven build file)
```
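The `pom.xml` mentioned above might look like the following sketch. This is an illustrative assumption, not the actual file from the repository: the artifact name mirrors the hypothetical module above, the parent version is a placeholder, and the dependency coordinates should be checked against the current `langchain4j` build before use.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>

    <!-- Inherit build conventions from the langchain4j parent;
         version X.Y.Z is a placeholder, not a real release. -->
    <parent>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j-parent</artifactId>
        <version>X.Y.Z</version>
        <relativePath>../langchain4j-parent/pom.xml</relativePath>
    </parent>

    <!-- Hypothetical module name, mirroring langchain4j-milvus. -->
    <artifactId>langchain4j-embedding-store-google-firestore</artifactId>

    <dependencies>
        <!-- Core abstractions: EmbeddingStore, Embedding, TextSegment. -->
        <dependency>
            <groupId>dev.langchain4j</groupId>
            <artifactId>langchain4j-core</artifactId>
        </dependency>
        <!-- Google Cloud Firestore client library. -->
        <dependency>
            <groupId>com.google.cloud</groupId>
            <artifactId>google-cloud-firestore</artifactId>
        </dependency>
    </dependencies>
</project>
```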
Okay, let's outline the steps to create a new language model module for LangChain4j and propose its inclusion. Based on the provided file structure, you'll be creating a new module similar to the existing ones (e.g., langchain4j-open-ai, langchain4j-ollama). Here's a breakdown of the process, referencing the structure you've provided.
## Key Steps and Considerations

- Understand the abstractions and SPI: LangChain4j, like its Python counterpart, is built around core abstractions. You need to understand these to implement your integration correctly. The core abstractions you must implement are:
  - `ChatLanguageModel` / `StreamingChatLanguageModel`: for conversational models (like ChatGPT or Gemini). Implement `ChatLanguageModel` for synchronous responses, and `StreamingChatLanguageModel` if the model supports streaming responses token by token.
  - `LanguageModel` / `StreamingLanguageModel`: for models with a simpler text-in, text-out interface (less common).
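The split between the synchronous and streaming abstractions above can be sketched with simplified stand-in interfaces. Note these are assumptions for illustration only: the real LangChain4j interfaces live in `langchain4j-core` and carry richer message and response types (`ChatMessage`, `AiMessage`, `Response<AiMessage>`), and `EchoChatModel` is a toy provider, not a real integration.

```java
import java.util.List;

// Simplified stand-in for ChatLanguageModel: one call, one complete answer.
interface ChatModelSketch {
    String generate(List<String> messages);
}

// Simplified stand-in for StreamingChatLanguageModel: tokens are pushed
// to a handler as they arrive instead of being returned in one piece.
interface StreamingChatModelSketch {
    void generate(List<String> messages, TokenHandler handler);
}

interface TokenHandler {
    void onNext(String token);
    void onComplete();
}

// Toy "provider integration" that echoes the last user message. A real
// module would call the provider's API here and map its wire format to
// LangChain4j's message types.
class EchoChatModel implements ChatModelSketch, StreamingChatModelSketch {
    @Override
    public String generate(List<String> messages) {
        return "echo: " + messages.get(messages.size() - 1);
    }

    @Override
    public void generate(List<String> messages, TokenHandler handler) {
        // Fake streaming: split the synchronous answer into word "tokens".
        for (String token : generate(messages).split(" ")) {
            handler.onNext(token);
        }
        handler.onComplete();
    }
}
```

A new module typically implements the synchronous variant first, then adds the streaming variant once the provider's streaming endpoint is wired in.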
```java
import com.google.gson.Gson;
import dev.langchain4j.data.message.*;
import dev.langchain4j.model.output.Response;
import dev.langchain4j.model.vertexai.SchemaHelper;
import dev.langchain4j.model.vertexai.VertexAiGeminiChatModel;
import java.io.IOException;
import java.nio.file.FileVisitOption;
import java.nio.file.Files;
import java.nio.file.Path;
```