Rocket.chat: "Failed_To_upload_Import_File" for Slack Import

Created on 5 Feb 2019 · 8 comments · Source: RocketChat/Rocket.Chat

Description:

When attempting to import a .zip file with Slack history, an error message always pops up in the upper right-hand corner saying "Failed_To_upload_Import_File".
This error does not produce a log entry.
Regular file uploads work well.

Steps to reproduce:

Go to 'Administration' > 'Import' > 'Slack' > either 'Choose File' or use a 'File URL'
Both options result in the same error.

Expected behavior:

Slack history should be uploaded to the server.

Actual behavior:

An error message always pops up in the upper right-hand corner saying "Failed_To_upload_Import_File".

Server Setup Information:

  • Version of Rocket.Chat Server: Latest Docker image
  • Operating System: Debian Stretch
  • Deployment Method: Docker
  • Number of Running Instances: 1
  • DB Replicaset Oplog:
  • NodeJS Version: node:8.11-slim (from Rocket.Chat Docker image)
  • MongoDB Version: MongoDB shell version v4.0.5

Additional context

I am evaluating Rocket.Chat to use at our company instead of Slack and am running into an issue when trying to import Slack history.
I have set up a self-hosted instance of Rocket.Chat on our server, which runs Debian Stretch. Both Rocket.Chat and MongoDB run in Docker containers and everything works smoothly: registration, chat, file uploads (GridFS), encryption, backups, etc.

The POST request fails with a 413 Payload Too Large status when trying to import a Slack .zip file.
However, the file is less than 1 MB, and I have successfully tested uploads of 50+ MB files.
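Since the 413 fires on an archive well under the working upload limit, a quick local sanity check of the export archive can rule out a corrupt or unexpectedly large zip before blaming the server. A minimal sketch (the file name `slack_export.zip` is a placeholder; the demo archive is built inline so the snippet is self-contained):

```python
import os
import zipfile

# Build a tiny stand-in export so the sketch is self-contained;
# point at your real export archive instead.
with zipfile.ZipFile("slack_export.zip", "w") as zf:
    zf.writestr("channels.json", "[]")
    zf.writestr("users.json", "[]")

def inspect_export(path):
    """Return (size in MB, member names) for a Slack export zip,
    failing loudly if any member is corrupt."""
    size_mb = os.path.getsize(path) / (1024 * 1024)
    with zipfile.ZipFile(path) as zf:
        # testzip() returns the first corrupt member name, or None
        assert zf.testzip() is None, "archive has a corrupt member"
        names = zf.namelist()
    return size_mb, names

size_mb, names = inspect_export("slack_export.zip")
print(f"{size_mb:.4f} MB, members: {names}")
```

If the archive is intact and small, the 413 is almost certainly coming from something in front of the app rather than from the file itself.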

Relevant logs:

The server does not produce a log entry when the error occurs.
The browser shows a 413 Payload Too Large response.

All 8 comments

Question: Do your other file uploads work as expected or do they all produce a similar error with no logs?

Update: I saw in your question that regular file uploads work correctly.

@Tomasvrba

I use nginx as a reverse proxy and was able to resolve this issue by making sure that I included

client_max_body_size 0;

under the _http {_ section of the nginx.conf file. Setting it to any fixed amount did not work, even values in the GBs; setting it to 0 removes the request-body limit entirely, which ensures this error doesn't pop up.
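For context, a minimal sketch of where that directive sits in nginx.conf; the server name and upstream port below are placeholders, not taken from this thread:

```nginx
http {
    # 0 disables nginx's request-body size check entirely;
    # fixed values reportedly still triggered the 413 here.
    client_max_body_size 0;

    server {
        listen 443 ssl;
        server_name chat.example.com;          # placeholder

        location / {
            proxy_pass http://127.0.0.1:3000;  # Rocket.Chat upstream (placeholder)
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
```

Reload nginx (`nginx -s reload`) after editing for the change to take effect.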

@Tomasvrba

Thanks a bunch. I added that line to the config file and it worked like a charm.

@chirospasm Thanks for the tip. We ended up just switching to RC without importing the Slack history and at this point there probably isn't too much of a reason for doing it. I will test out your solution on a new instance though and close this issue since it seems to be working.

Hey, I've got the same issue here. Unfortunately I have no access to the nginx.conf file. However, I access RocketChat directly via IP and port, so the nginx reverse proxy should have no influence.

But it's still the same: RocketChat throws a timeout (the RocketChat site tries to "connect...") while uploading the zip file (8 MB).

After a while it reconnects, showing all the users but only the public channels in the import options.
Does anyone have an idea about that?
Messages says "66120", so it looks like it has parsed all the messages, but not all the channels.
(I have disabled all the rate limiters.)
The import of Slack's "standard" export (only public channels and users) worked flawlessly; that's why we switched to the Plus plan to use the "corporate" export (incl. private chats and closed channels). It would be a pity if we could not import these files now...

EDIT: Looks like there are 2 independent issues:

  1. RocketChat throws a connection timeout while uploading/processing a large Slack export .zip file. It reconnects after the import is finished, but the "connecting..." info at the top of the page is irritating. The progress that is logged in importer.js (super.updateProgress(ProgressStep.PREPARING_CHANNELS) etc.) should be displayed in the frontend.
  2. Currently importer.js only processes public Slack channels (channels.json), users (users.json) and the message folders (all messages, not only the ones from channels.json). I will open a feature request about also processing direct messages (dms.json), private channels (groups.json) and multi-user conversations (mpims.json) from Slack corporate exports.
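The gap described in point 2 can be checked locally: given an extracted export directory, list which of the metadata files named above are present versus what the importer reportedly reads. A sketch based only on the file names from this comment (the demo directory is a throwaway stand-in for a real corporate export):

```python
import os
import tempfile

# Metadata files found in Slack exports, per the comment above;
# the importer reportedly only consumes the first two.
EXPORT_FILES = ["channels.json", "users.json", "dms.json", "groups.json", "mpims.json"]
IMPORTED = {"channels.json", "users.json"}

def audit_export(export_dir):
    """Map each known metadata file to 'imported', 'skipped', or 'missing'."""
    report = {}
    for name in EXPORT_FILES:
        if not os.path.exists(os.path.join(export_dir, name)):
            report[name] = "missing"
        elif name in IMPORTED:
            report[name] = "imported"
        else:
            report[name] = "skipped"
    return report

# Demo with a throwaway directory standing in for an extracted export.
demo = tempfile.mkdtemp()
for name in ("channels.json", "users.json", "groups.json"):
    open(os.path.join(demo, name), "w").close()
print(audit_export(demo))
```

Any file reported as "skipped" is data that a corporate export contains but that, per this comment, would not make it into Rocket.Chat.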

I am having the exact same issue. Running via nginx with the body size set to unlimited, as well as going to the node process by IP and port, yields the same error. The export .zip file is 3 MB in size and uploading shows the error, after which I am silently returned to the standard Slack upload page.

I have exactly the same issue. Adjusting the reverse proxy does not help. Can someone share a detailed workaround?
