Robots.txt File
Define how the CDN serves the robots.txt file to control how search engines crawl and index your content.
The Robots.txt File feature determines whether the CDN serves a robots.txt file to search engines and where that file comes from. This lets you allow crawling of all content with the CDN's default file or defer crawling rules to your origin.
You can manage Robots.txt File settings in the Medianova Control Panel or via API.
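If you prefer automation, the same setting can be changed over the API. The sketch below is illustrative only: the base URL, endpoint path, payload field name, and authentication header are assumptions, not Medianova's documented API, so consult the API reference for the actual contract.

```python
import requests

# Hypothetical values -- replace with your real API token, resource ID,
# and the endpoint documented in the Medianova API reference.
API_TOKEN = "your-api-token"
RESOURCE_ID = "your-cdn-resource-id"
BASE_URL = "https://api.example.com"  # assumed, not the real base URL

def set_robots_txt_mode(mode: str) -> None:
    """Set the Robots.txt File mode: 'disabled', 'enabled', or 'origin'."""
    if mode not in ("disabled", "enabled", "origin"):
        raise ValueError(f"unknown mode: {mode}")
    response = requests.patch(
        f"{BASE_URL}/cdn/resources/{RESOURCE_ID}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"robots_txt_mode": mode},  # assumed field name
        timeout=10,
    )
    response.raise_for_status()

set_robots_txt_mode("origin")
```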
Log in to the Medianova Control Panel, select a CDN resource in the CDN section, and navigate to the Caching tab.
Configure Robots.txt File
Choose how the CDN should provide the robots.txt file for the selected resource.
Select the Robots.txt File Mode
Choose Disabled, Enabled, or Origin from the dropdown.
Mode Behaviors
Disabled — No robots.txt file is served.
Enabled — MN CDN serves its default robots.txt file, allowing crawlers to index all content.
Origin — The CDN fetches and serves the robots.txt file from your origin.

Enabled mode always serves MN CDN’s default robots.txt content. Customers cannot upload or configure a custom robots.txt file through the Panel.
Origin mode delegates full control to your origin server.
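For example, an origin-hosted robots.txt that allows crawling of everything except one path might look like this (the paths and sitemap URL below are placeholders):

```
User-agent: *
Disallow: /private/
Allow: /
Sitemap: https://www.example.com/sitemap.xml
```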
Changes take effect as soon as they are saved, but may take a short time to propagate across all PoPs.
FAQ
Does Medianova allow uploading a custom robots.txt file?
No. In Enabled mode, the CDN serves a predefined robots.txt file that allows all crawling. To use a custom file, select Origin mode and host the file on your origin.
What happens if my origin has no robots.txt file but I choose Origin mode?
The CDN will return 404 Not Found, and search engines will proceed as if no robots.txt file exists.
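A quick way to confirm the behavior is to request robots.txt through your CDN hostname and inspect the status code. The hostname below is a placeholder; use your own CDN resource's hostname.

```python
import requests

# Placeholder hostname -- replace with your CDN resource's hostname.
url = "https://cdn.example.com/robots.txt"

response = requests.get(url, timeout=10)
if response.status_code == 404:
    print("No robots.txt served; crawlers will treat all content as crawlable.")
else:
    print(f"HTTP {response.status_code}:\n{response.text}")
```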
Does robots.txt affect CDN caching?
No. It only controls search engine bot behavior and does not interact with cache rules.