TextRazor API

Time Status Message
6 months ago No Reported Issues
6 months ago At around 8:45pm UTC the TextRazor API was returning HTTP 500 errors and timing out for some clients, with connectivity issues lasting 5-10 minutes. We have traced the issue to a poorly configured database update that was exacerbated by higher than normal loads. The problem has now been fixed.
6 months ago At around 8:45pm UTC the TextRazor API was returning HTTP 500 errors and timing out for some clients. We've traced the problem to a database load issue, which has now been resolved. Request times are back to normal and we continue to monitor.
2 years, 1 month ago No Reported Issues
2 years, 1 month ago We've identified a bug in a recent API deployment that triggered a 1-2 minute service degradation at around 5pm UTC 29th August, and again at 11am UTC 30th August. During this time the API was slower than normal, and returned HTTP 500 errors. We're deploying a patch across our cluster and we do not expect further disruption.
2 years, 1 month ago TextRazor experienced a period of sporadic availability for approximately a minute at around 10:58 UTC. The system has recovered and is back to normal; we are investigating the root cause.
2 years, 7 months ago The TextRazor website is now available following downtime on our website host's network. The TextRazor API uses separate infrastructure and was unaffected.
2 years, 7 months ago The TextRazor website and admin console are currently unavailable due to a connectivity issue at our host. The TextRazor API and all other services are operating as normal.
2 years, 7 months ago On Sat, 19 Feb 2022 at around 12:34 UTC the TextRazor API experienced 4-5 minutes of unavailability for some users after a bug caused widespread failure across our cluster. The system self-reset, minimising downtime, and we have rolled out a patch.
2 years, 7 months ago The TextRazor API was returning HTTP 500 errors to some clients for around 3-4 minutes starting at 12:34 UTC. The system has been running smoothly since then. We believe this was a bug triggered by a specific input; we continue to monitor and investigate the root cause.
2 years, 7 months ago We have detected a jump in HTTP 500 errors being returned from the TextRazor API for some clients and are investigating.
2 years, 11 months ago No Reported Issues
2 years, 11 months ago We have identified a bug that is triggered by unexpected input - this caused increased latencies and HTTP 500 errors for TextRazor API requests for around 5 minutes at 19:05 UTC. We're currently rolling out a patch across our cluster.
2 years, 11 months ago Systems have returned to normal; we are continuing to investigate a bug in our routing system.
2 years, 11 months ago We are investigating higher than average processing latencies across our cluster.
4 years, 2 months ago No Reported Issues
4 years, 2 months ago At 14:30 UTC today TextRazor was unavailable for approximately 3 minutes, and suffered from longer-than-usual latencies for several further minutes as it self-repaired and caught up. We added extra capacity to mitigate the impact, and have traced the problem to a likely race condition in one of our dependencies that caused cascading load issues across our cluster. We are pushing out a patch to fix the problem longer term.
4 years, 2 months ago Connectivity and analysis times have returned to normal; we continue to investigate the root cause.
4 years, 2 months ago Investigating connectivity issue with TextRazor API
4 years, 4 months ago No Reported Issues
4 years, 4 months ago Some clients have been experiencing SSL errors when connecting to the main TextRazor endpoint at https://api.textrazor.com today. A CA root certificate from Sectigo expired today (https://support.sectigo.com/articles/Knowledge/Sectigo-AddTrust-External-CA-Root-Expiring-May-30-2020). While most callers to TextRazor picked up the change, some failed with an SSL validation error. To mitigate the problem we have moved to a new SSL certificate on https://api.textrazor.com - please contact support@textrazor.com if you are still seeing any errors.
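If you want to confirm that your environment trusts the current certificate on https://api.textrazor.com, a quick local check along the following lines can help. This is a minimal sketch using Python's standard library, not an official TextRazor tool; a failed handshake here usually points to an outdated local trust store rather than an API outage.

    # Minimal sketch: open a TLS connection to api.textrazor.com using the
    # system trust store and print details of the certificate it presents.
    # An ssl.SSLCertVerificationError here suggests your local CA bundle
    # needs updating rather than a problem with the API itself.
    import socket
    import ssl

    HOSTNAME = "api.textrazor.com"

    context = ssl.create_default_context()  # uses the system CA bundle
    with socket.create_connection((HOSTNAME, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=HOSTNAME) as tls:
            cert = tls.getpeercert()
            print("Negotiated protocol:", tls.version())
            print("Issuer:", dict(pair[0] for pair in cert["issuer"]))
            print("Expires:", cert["notAfter"])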
4 years, 6 months ago No Reported Issues
4 years, 6 months ago At around 15:19 UTC the TextRazor API suffered from degraded performance for a period of around 3 minutes. This was due to a classifier database hardware problem, with auto failover to a secondary node taking longer than expected. The system is now fully recovered; we will continue to monitor while investigating the root cause.
4 years, 6 months ago We are investigating increased latency across our network.
4 years, 12 months ago No Reported Issues
5 years ago Around midnight UTC TextRazor started returning HTTP 404 responses to a sample of requests in our North America region. This was traced to a database configuration issue on one of our router nodes, which has been resolved; there are no further issues.
5 years, 5 months ago At around 18:30 UTC today TextRazor experienced some scale-up issues with an increase in load, such that request latencies were higher than usual and some requests were timing out for a 5-minute period. The system has been running smoothly since, but we will continue to monitor.
5 years, 5 months ago Since approximately 18:30 UTC the TextRazor API has been showing increased analysis latencies and timeouts for some customers. Performance has improved as we increase capacity for those affected, but latency is still slightly higher than normal. We are investigating.
5 years, 11 months ago No Reported Issues
5 years, 11 months ago At around 14:40 UTC the TextRazor API responded with timeouts for a 10-second period. Following the incident, analysis latencies were higher than usual for around an hour, after which they settled back to normal levels. This was caused by a performance regression exacerbated by a heavier than usual load. We have increased our cluster size while we roll out an emergency patch; there should not be any further problems going forward.
5 years, 11 months ago Since approximately 14:40 UTC the TextRazor API has been showing increased analysis latencies and timeouts for some customers. Performance has improved, but latency is still slightly higher than normal. We are investigating.
5 years, 11 months ago No Reported Issues
5 years, 11 months ago We have rolled out an update and "url" analysis requests are now fully functional. We will continue to monitor the situation.
5 years, 11 months ago Due to an issue with our proxy network, TextRazor API is currently timing out for some users when analyzing "urls". There is no impact for requests containing "text" directly. We are currently deploying a fix.
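For clients affected during this window, a workaround was to download the page yourself and submit its content in the "text" field instead of relying on "url" analysis. The sketch below illustrates the two request styles; it assumes the publicly documented REST endpoint and parameters (text, url, extractors), a placeholder API key, and a stand-in article URL, so adapt it to your own integration.

    # Illustration only: "url" vs "text" analysis requests.
    # YOUR_API_KEY is a placeholder; https://example.com/article is a stand-in URL.
    import requests

    ENDPOINT = "https://api.textrazor.com"
    HEADERS = {"X-TextRazor-Key": "YOUR_API_KEY"}

    # "url" analysis: TextRazor fetches the page through its proxy network
    # (the part affected by this incident).
    url_response = requests.post(
        ENDPOINT,
        headers=HEADERS,
        data={"url": "https://example.com/article", "extractors": "entities,topics"},
    )

    # "text" analysis: you supply the content directly, bypassing the proxy
    # network. In practice you would extract readable text from the HTML first.
    page = requests.get("https://example.com/article", timeout=10)
    text_response = requests.post(
        ENDPOINT,
        headers=HEADERS,
        data={"text": page.text, "extractors": "entities,topics"},
    )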
6 years, 5 months ago Analysis latency is returning to normal as TextRazor's infrastructure took slightly longer than usual to scale up to handle a burst in load.
6 years, 5 months ago TextRazor API analysis latencies are higher than usual right now; we are investigating the cause.
6 years, 5 months ago No reported issues
6 years, 5 months ago At around 17:05 UTC today the TextRazor API was slower than usual for around 5 minutes due to a performance degradation caused by a bad user input across some of our servers. The issue has been resolved and we'll roll out a long-term fix this week.
6 years, 5 months ago Our monitoring system reports increased document analysis latency; we are investigating.
6 years, 7 months ago At 14:45 UTC the TextRazor API experienced higher than usual processing latency and timeouts for a period of around 5 minutes due to a networking issue. This has now been resolved.
7 years, 1 month ago No reported issues
7 years, 1 month ago Due to a database configuration issue, TextRazor classification functionality was unavailable for approximately 5 minutes starting at 22:25 GMT. This has now been resolved and analysis latencies are back at normal levels.
7 years, 2 months ago No reported issues
7 years, 3 months ago Unexpected load on one of the TextRazor API maintenance endpoints caused cascading performance issues across all our frontend routing nodes. This has been patched and request latencies are now back to normal levels.
7 years, 3 months ago Investigating connectivity issues with the TextRazor API
7 years, 6 months ago No reported issues
7 years, 6 months ago We have rolled out a patch to address the performance degradation we have seen over the last hour. We do not expect any further disruption, but will continue to monitor the situation.
7 years, 6 months ago A bug in a recent TextRazor release is triggering performance issues when analyzing an unexpected input. This has resulted in sporadic timeouts and errors over the past 30 minutes. We've partially backed out the change, and are working on a patch.
7 years, 6 months ago Investigating disruption to the TextRazor API
7 years, 6 months ago At approximately 15:10 GMT TextRazor experienced processing delays and timeouts for some users due to a performance regression in a new release. This was rolled back and service was fully restored immediately; the disruption lasted around 2-3 minutes. No further issues.
7 years, 6 months ago We are investigating increased latency from the TextRazor API
7 years, 6 months ago No reported issues
7 years, 6 months ago Due to a network connectivity issue TextRazor experienced processing delays and timeouts for several minutes starting around 8pm GMT. Our host has fixed the problem and processing times have returned to normal. We are continuing to monitor.
7 years, 6 months ago We are investigating increased latencies and dropped requests from the TextRazor API.
7 years, 7 months ago Access to the account management page has been restored, no further issues.
7 years, 7 months ago Due to an outage with our website host Heroku and AWS, some users are unable to access their account page and downloads. The TextRazor API is unaffected. Apologies for any inconvenience, if you have any account queries please contact support@textrazor.com and we would be happy to assist.
7 years, 8 months ago No reported issues
7 years, 9 months ago TextRazor was returning an error for some clients at approximately 8:20 UTC for several minutes. This was caused by a runaway process on one of our router nodes. To mitigate any impact the affected node was immediately removed from service, and has now been restored.
7 years, 10 months ago Due to a misconfigured load balancer the TextRazor API was returning errors sporadically until 11AM GMT. This has been resolved.
7 years, 11 months ago No further issues
7 years, 11 months ago We are continuing to monitor the global DNS situation. The TextRazor site and admin panel have been sporadically unavailable for several hours due to an outage with our host. This has largely been resolved, though some errors are still occurring as DNS changes propagate. The TextRazor API remains fully available.
7 years, 11 months ago The TextRazor website and admin panel are currently unavailable due to ongoing DNS resolution issues at our hosting provider. There is currently no impact to the API, which operates on completely separate infrastructure. Please contact support@textrazor.com if you have any questions about your account while this is resolved.
8 years, 5 months ago A fix for the issue causing yesterday's disruption has been deployed and running without problems for the last 24 hours.
8 years, 5 months ago At approximately 03:35 UTC today TextRazor was returning error messages and experiencing increased latencies for 10 minutes. We believe this was due to a combination of heavy load in the system and a performance bottleneck in our scaling logic. We have taken measures to increase capacity to handle the load while we confirm and release an emergency fix. We do not expect any further impact, but will continue to monitor the situation.
8 years, 7 months ago The TextRazor API has returned to normal. We had a few minutes of partial downtime caused by connectivity problems between our EC2 frontend machines. The issue has now been resolved.
8 years, 7 months ago We are currently investigating increased error rates.
8 years, 11 months ago We have migrated back to our main data centre, all systems are back to normal.
8 years, 11 months ago We have now been running smoothly in a backup datacentre for 50 minutes. Increased analysis latency and timeouts were caused by a fibre cut that resulted in our host's (OVH) entire datacentre going offline. We started a full migration to an alternative host when the details emerged. Unfortunately the migration ramp-up took longer than expected, meaning TextRazor was seeing performance degradation for up to two hours. We continue to monitor the situation, and apologize for any inconvenience caused.
8 years, 11 months ago We have started seeing increased latency and timeouts again - our host is still repairing their connection. In the meantime we are bringing up extra capacity at an alternative location.
8 years, 11 months ago Connectivity problems are ongoing due to a partial datacentre outage at our host. This has been traced to a cut fibre cable. We are continuing to monitor.
8 years, 11 months ago We are continuing to experience higher than usual request times, due to an issue with our host's datacentre.
8 years, 11 months ago We are investigating higher than usual request processing times.
8 years, 11 months ago At 14:31 UTC today TextRazor was returning 500 - Internal Server Error to a large proportion of requests for approximately 2 minutes. This was caused by a networking issue with our upstream provider, which has now been resolved. We do not expect any further disruption, but will continue to monitor.
8 years, 11 months ago At 15:31 today the TextRazor API was returning errors for all calls for several minutes. The system is currently back to normal; we are investigating the root cause.
9 years, 1 month ago No Reported Issues
9 years, 3 months ago TextRazor experienced higher than usual latencies and sporadic timeouts between 7:30 AM and 8:00 AM UTC today. This was caused by a burst in load that our auto-scaling infrastructure took longer than usual to keep up with. The system has now returned to normal.
9 years, 4 months ago Performance has returned to normal following emergency repairs to our host's network.
9 years, 4 months ago Our host is experiencing datacentre-wide connectivity problems, causing higher latencies to all of our backend machines. We will update here as we get more information.
9 years, 4 months ago We are currently experiencing higher than usual API latencies due to an internal networking issue. We are working with our provider to resolve this asap.
9 years, 4 months ago TextRazor experienced increased latencies for a small number of requests today between 18:00 and 18:30. This was due to a hardware issue on one of our backend machines, which has since been removed from service.
9 years, 5 months ago Issue traced to a failure case triggered by a bad request; we're rolling out a fix as soon as possible to prevent future occurrences.
9 years, 5 months ago The TextRazor API responded with HTTP internal server errors for two minutes at approximately 22:00 GMT. We are investigating the root cause.
9 years, 6 months ago Network issues have now been resolved and latency has decreased to normal levels. We will continue to monitor the situation.
9 years, 6 months ago Increased latencies appear to be a result of networking issues - we're working with our provider to resolve the situation.
9 years, 6 months ago Investigating increased latencies for some requests.
9 years, 7 months ago At approximately 13:00 GMT the TextRazor API was intermittently returning a 503 Service Unavailable message to high volume users for up to 5 minutes. This was caused by a configuration issue with our frontend Amazon Elastic Load Balancer, which has now been resolved.
9 years, 11 months ago No Reported Issues
9 years, 11 months ago A new issue affecting SSLv3, known as POODLE or CVE-2014-3566, was recently discovered. As a precaution we have disabled support for SSLv3 on our outer SSL termination layer (provided by Amazon's Elastic Load Balancer). Since most recent client-side SSL libraries use the modern TLS protocol that is unaffected by this issue, we believe there will be minimal impact from this change. Please contact support@textrazor.com if you are experiencing any problems with SSL connections to https://api.textrazor.com.
10 years, 2 months ago TextRazor experienced up to a minute of partial downtime at 16:50 UTC as one of our frontend machines lost network connectivity; we are investigating the root cause.
10 years, 3 months ago Service has been restored after ~1 minute of downtime. The issue was triggered while handling some invalid user input; we will deploy a long-term fix today.
10 years, 3 months ago TextRazor is unavailable for some users while one of our clusters is restarting following a system error. Service will be restored in the next minute or so.
10 years, 4 months ago No Reported Issues
10 years, 4 months ago TextRazor experienced approximately 30 seconds of partial downtime due to a routing issue. The issue has now been resolved.
11 years ago No Reported Issues

Status Legend

  • Service disruption

  • Informational message

  • Service is operating normally