
DATA.GOV.HK
Provides access to the Hong Kong government's official open data portal, DATA.GOV.HK, allowing you to search, browse, and retrieve metadata for thousands of public datasets.
Integrates with DATA.GOV.HK to give researchers, developers, and data scientists comprehensive access to Hong Kong government datasets through search, filtering, and metadata retrieval tools.
What it does
- Search datasets by keywords and metadata
- Browse datasets by category and format
- Retrieve detailed dataset information and metadata
- List available data categories and formats
- Filter datasets by file format (CSV, JSON, GeoJSON, etc.)
- Get faceted search results for data exploration
About DATA.GOV.HK
DATA.GOV.HK is a community-built MCP server published by mcp-open-data-hk that provides AI assistants with tools and capabilities via the Model Context Protocol. It offers easy search, filtering, and metadata tools for Hong Kong government datasets, making it ideal for research. It is categorized under analytics and data. This server exposes 8 tools that AI clients can invoke during conversations and coding sessions.
How to install
You can install DATA.GOV.HK in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
License
DATA.GOV.HK is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.
Tools (8)
list_datasets: Get a list of dataset IDs from data.gov.hk.
Args:
- limit: Maximum number of datasets to return (default: 1000)
- offset: Offset of the first dataset to return
- language: Language code (en, tc, sc)
get_dataset_details: Get detailed information about a specific dataset.
Args:
- dataset_id: The ID or name of the dataset to retrieve
- language: Language code (en, tc, sc)
- include_tracking: Add tracking information to dataset and resources
list_categories: Get a list of data categories (groups).
Args:
- order_by: Field to sort by ('name' or 'packages') - deprecated, use sort instead
- sort: Sorting of results ('name asc', 'package_count desc', etc.)
- limit: Maximum number of categories to return
- offset: Offset for pagination
- all_fields: Return full group dictionaries instead of just names
- language: Language code (en, tc, sc)
get_category_details: Get detailed information about a specific category (group).
Args:
- category_id: The ID or name of the category to retrieve
- include_datasets: Include a truncated list of the category's datasets
- include_dataset_count: Include the full package count
- include_extras: Include the category's extra fields
- include_users: Include the category's users
- include_groups: Include the category's sub-groups
- include_tags: Include the category's tags
- include_followers: Include the category's number of followers
- language: Language code (en, tc, sc)
search_datasets: Search for datasets by query term using the package_search API. This function searches across dataset titles, descriptions, and other metadata to find datasets matching the query term.
Args:
- query: The Solr query string (e.g., "transport", "weather", "*:*" for all)
- limit: Maximum number of datasets to return (default: 10, max: 1000)
- offset: Offset for pagination
- language: Language code (en, tc, sc)
Returns: A dictionary containing:
- count: Total number of matching datasets
- results: List of matching datasets (up to limit)
- has_more: Boolean indicating if there are more results available
mcp-open-data-hk
This is an MCP (Model Context Protocol) server that provides access to data from DATA.GOV.HK, the official open data portal of the Hong Kong government.
Installation
Installing via Smithery
To install mcp-open-data-hk for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @mcp-open-data-hk/mcp-open-data-hk --client claude
Using uv (recommended)
When using uv no specific installation is needed. We will
use uvx to directly run mcp-open-data-hk.
Using PIP
Alternatively you can install mcp-open-data-hk via pip:
pip install mcp-open-data-hk
After installation, you can run it as a script using:
python -m mcp_open_data_hk
After installation, configure your MCP-compatible client (like Cursor, Claude Code, or Claude Desktop) by adding the following to your settings.json:
Using uvx
{
"mcpServers": {
"mcp-open-data-hk": {
"command": "uvx",
"args": ["mcp-open-data-hk"]
}
}
}
Using pip installation
{
"mcpServers": {
"mcp-open-data-hk": {
"command": "python",
"args": ["-m", "mcp_open_data_hk"]
}
}
}
Features
The server provides the following tools to interact with the DATA.GOV.HK API:
- list_datasets - Get a list of dataset IDs
- get_dataset_details - Get detailed information about a specific dataset
- list_categories - Get a list of data categories
- get_category_details - Get detailed information about a specific category
- search_datasets - Search for datasets by query term with advanced options
- search_datasets_with_facets - Search datasets and return faceted results
- get_datasets_by_format - Get datasets by file format
- get_supported_formats - Get list of supported file formats
Tools
list_datasets
Get a list of dataset IDs from DATA.GOV.HK
Parameters:
- limit (optional): Maximum number of datasets to return (default: 1000)
- offset (optional): Offset of the first dataset to return
- language (optional): Language code (en, tc, sc) - defaults to "en"
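Since list_datasets caps each call at limit results, a client pages through the full catalog by stepping offset. A minimal sketch of that arithmetic (hypothetical helper, not part of the server):

```python
def page_offsets(total: int, limit: int):
    """Yield (limit, offset) pairs that cover `total` items page by page.

    Hypothetical helper showing how limit and offset combine when
    paging through list_datasets.
    """
    offset = 0
    while offset < total:
        yield limit, offset
        offset += limit

# For 2500 dataset IDs fetched 1000 at a time, three calls are needed:
pages = list(page_offsets(2500, 1000))
print(pages)  # [(1000, 0), (1000, 1000), (1000, 2000)]
```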
get_dataset_details
Get detailed information about a specific dataset
Parameters:
- dataset_id: The ID or name of the dataset to retrieve
- language (optional): Language code (en, tc, sc) - defaults to "en"
- include_tracking (optional): Add tracking information to dataset and resources - defaults to False
list_categories
Get a list of data categories (groups)
Parameters:
- order_by (optional): Field to sort by ('name' or 'packages') - deprecated, use sort instead
- sort (optional): Sorting of results ('name asc', 'package_count desc', etc.) - defaults to "title asc"
- limit (optional): Maximum number of categories to return
- offset (optional): Offset for pagination
- all_fields (optional): Return full group dictionaries instead of just names - defaults to False
- language (optional): Language code (en, tc, sc) - defaults to "en"
get_category_details
Get detailed information about a specific category (group)
Parameters:
- category_id: The ID or name of the category to retrieve
- include_datasets (optional): Include a truncated list of the category's datasets - defaults to False
- include_dataset_count (optional): Include the full package count - defaults to True
- include_extras (optional): Include the category's extra fields - defaults to True
- include_users (optional): Include the category's users - defaults to True
- include_groups (optional): Include the category's sub-groups - defaults to True
- include_tags (optional): Include the category's tags - defaults to True
- include_followers (optional): Include the category's number of followers - defaults to True
- language (optional): Language code (en, tc, sc) - defaults to "en"
search_datasets
Search for datasets by query term using the package_search API.
This function searches across dataset titles, descriptions, and other metadata to find datasets matching the query term. It supports advanced Solr search parameters.
Parameters:
- query (optional): The Solr query string (e.g., "transport", "weather", "*:*" for all) - defaults to "*:*"
- limit (optional): Maximum number of datasets to return (default: 10, max: 1000)
- offset (optional): Offset for pagination - defaults to 0
- language (optional): Language code (en, tc, sc) - defaults to "en"
Returns: A dictionary containing:
- count: Total number of matching datasets
- results: List of matching datasets (up to limit)
- search_facets: Faceted information about the results
- has_more: Boolean indicating if there are more results available
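The has_more flag follows directly from count, offset, and the number of results returned; a minimal sketch of the assumed semantics:

```python
def has_more(count: int, offset: int, returned: int) -> bool:
    """True if matches exist beyond the current page.

    Assumed semantics: `count` is the total number of matching datasets
    and `returned` is the number of results in the current page.
    """
    return offset + returned < count

print(has_more(count=42, offset=0, returned=10))   # True: first page of 42 matches
print(has_more(count=42, offset=40, returned=2))   # False: last page
```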
search_datasets_with_facets
Search for datasets and return faceted results for better data exploration.
This function is useful for exploring what types of data are available by showing counts of datasets grouped by tags, organizations, or other facets.
Parameters:
- query (optional): The Solr query string - defaults to "*:*"
- language (optional): Language code (en, tc, sc) - defaults to "en"
Returns: A dictionary containing:
- count: Total number of matching datasets
- search_facets: Faceted information about the results
- sample_results: First 3 matching datasets
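A facet is essentially a group-by count over dataset metadata fields. A rough illustration with hypothetical sample records (the real search_facets shape comes from the API):

```python
from collections import Counter

# Hypothetical records, shaped loosely like CKAN package metadata.
datasets = [
    {"title": "Bus routes", "tags": ["transport", "bus"]},
    {"title": "MTR stations", "tags": ["transport", "rail"]},
    {"title": "Rainfall", "tags": ["weather"]},
]

# A tag facet: how many datasets carry each tag.
tag_facet = Counter(tag for d in datasets for tag in d["tags"])
print(tag_facet["transport"])  # 2
```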
get_datasets_by_format
Get datasets that have resources in a specific file format.
Parameters:
- file_format: The file format to filter by (e.g., "CSV", "JSON", "GeoJSON")
- limit (optional): Maximum number of datasets to return - defaults to 10
- language (optional): Language code (en, tc, sc) - defaults to "en"
Returns: A dictionary containing:
- count: Total number of matching datasets
- results: List of matching datasets
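Format filtering amounts to keeping datasets with at least one resource in the requested format; a sketch over hypothetical records (field names are assumptions, loosely following CKAN):

```python
def by_format(datasets: list[dict], file_format: str) -> list[dict]:
    """Keep datasets that have at least one resource in `file_format`
    (case-insensitive). Hypothetical stand-in for the server-side filter."""
    wanted = file_format.upper()
    return [d for d in datasets
            if any(r["format"].upper() == wanted for r in d["resources"])]

datasets = [
    {"title": "Bus routes", "resources": [{"format": "CSV"}, {"format": "JSON"}]},
    {"title": "Rainfall", "resources": [{"format": "XML"}]},
]
print([d["title"] for d in by_format(datasets, "csv")])  # ['Bus routes']
```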
get_supported_formats
Get a list of file formats supported by DATA.GOV.HK
Returns: A list of supported file formats
Local Testing
Run test scripts:
python tests/test_client.py
python tests/debug_search.py
python tests/comprehensive_test.py
Run server directly:
python -m src.mcp_open_data_hk
Run unit tests:
pytest tests/
Understanding Path Configuration
When installed as a package, the server can be referenced by its module name rather than file path. This is more convenient for users as they don't need to specify full file paths.
Installed Package:
{
"mcpServers": {
"mcp-open-data-hk": {
"command": "python",
"args": ["-m", "mcp_open_data_hk"]
}
}
}
Local Development (file path approach):
{
"mcpServers": {
"mcp-open-data-hk": {
"command": "python",
"args": ["-m", "src.mcp_open_data_hk"],
"cwd": "/full/path/to/mcp-open-data-hk"
}
}
}
The package installation approach is recommended for end users, while the file path approach is useful for local development and testing.
Example Queries
Once installed, try these queries with your AI assistant:
- "List some datasets from the Hong Kong government data portal via mcp-open-data-hk mcp."
- "Find datasets related to transportation in Hong Kong. Use mcp-open-data-hk."
- "What categories of data are available on DATA.GOV.HK? Use mcp-open-data-hk."
- "Get details about the flight information dataset. Use mcp-open-data-hk."
- "Search for datasets about weather in Hong Kong. Use mcp-open-data-hk."
- "What file formats are supported by DATA.GOV.HK? Use mcp-open-data-hk."
- "Find CSV datasets about population. Use mcp-open-data-hk."
- "Show me the most common tags in transport datasets. Use mcp-open-data-hk."
The AI will automatically use the appropriate tools from your MCP server to fetch the requested information.
Troubleshooting
Common Issues
- Module not found errors: Make sure you've installed the dependencies with pip install -e . for local development, or pip install mcp-open-data-hk for the published package.
- Path issues: Ensure the cwd in your IDE configuration is the correct absolute path to the project root.
- Permission errors: On Unix systems, make sure the scripts have execute permissions: chmod +x src/mcp_open_data_hk/__main__.py
- FastMCP not found: Install it with: pip install fastmcp
Testing the Connection
If you're having issues, you can test the connection manually:
1. Run the server in one terminal: python -m src.mcp_open_data_hk
2. In another terminal, run the test client: python tests/test_client.py
If this works, the issue is likely in the IDE configuration.
Extending the Server
You can extend the server by adding more tools in src/mcp_open_data_hk/server.py. Follow the existing patterns:
- Add a new function decorated with @mcp.tool
- Provide a clear docstring explaining the function and parameters
- Implement the functionality
- Test with the client
The server automatically exposes all functions decorated with @mcp.tool to MCP clients.
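Following that pattern, a new tool is just a well-documented Python function. A standalone sketch (hypothetical tool name and logic; the @mcp.tool registration is shown only as a comment so the snippet runs without FastMCP installed):

```python
# In src/mcp_open_data_hk/server.py this function would carry the
# @mcp.tool decorator; it is shown undecorated here so the sketch
# runs on its own.

def filter_titles(titles: list[str], keyword: str) -> list[str]:
    """Hypothetical example tool: return dataset titles containing
    `keyword` (case-insensitive).

    A clear docstring like this becomes the tool description that
    MCP clients display.
    """
    kw = keyword.lower()
    return [t for t in titles if kw in t.lower()]

print(filter_titles(["Bus Routes", "Rainfall", "Minibus stops"], "bus"))
# ['Bus Routes', 'Minibus stops']
```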
GitHub Workflows
This project includes GitHub Actions workflows for CI/CD:
- CI Workflow: Runs tests across multiple Python versions (3.10-3.12) on every push/PR to main branch
- Publish Workflow: Automatically builds and publishes to TestPyPI on every push to main, and to PyPI on version tags (v*.*.*)
- Code Quality Workflow: Checks code formatting and linting on every push/PR
- Release Workflow: Automatically creates GitHub releases when tags are pushed
Setup for Publishing (Trusted Publishing)
This project uses PyPI's Trusted Publishing, which is more secure than using API tokens.