Document the security model of VSCode Remote Development #6608
I agree with the need for improved security documentation. Another example: dev containers can act as a sandbox to protect against certain supply chain attacks, like the recent incident with the … So it would be helpful to know VS Code's security design goals as far as using dev containers to isolate code from the host. If any VS Code update could include some change that breaks isolation, then there's not much point in a user trying to lock it down. But this is something the VS Code documentation should make clear, rather than letting each user try to figure out the team's intentions.

@zzh1996 I'm not an expert, but skimming some of the documentation -- Remote Development FAQ, Supporting Remote Development, Extension Host -- it looks like an extension could tell VS Code to run it on the local machine instead of on the server, and then malicious code on the server could modify …
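For reference, the local-versus-remote placement mentioned above is declared through the `extensionKind` field in an extension's package.json. A minimal sketch (the extension name is a placeholder; comments are for illustration and would not appear in a strict-JSON manifest):

```jsonc
{
  "name": "example-extension",   // placeholder name
  // "ui" = run on the local (client) machine,
  // "workspace" = run on the remote machine / in the container.
  // Listing both expresses an order of preference.
  "extensionKind": ["workspace", "ui"]
}
```

The concern raised in the comment follows from this: an extension that declares (or is overridden to) `"ui"` executes on the local machine even when the workspace is remote.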
As a specific example, it looks like if you have a remote setup like Windows local machine → remote (VM) host → container, Remote-Containers will copy your public key information from your Windows local machine into the container. Perhaps there's a bug involved, I'm not sure. In any case, this isn't quite "arbitrary code", but it's pretty dangerous, considering that a public key database can be extremely sensitive, and here VS Code is copying it into an environment I've double-sandboxed in a container on a remote (VM) host. I'd be interested in hearing what the security thinking ("model") is behind this behavior.

Actually, I suspect remote development was originally designed with the idea of developers working on relatively untrusted workstations reaching out to trusted internal networks. In that case, the remote system would be at least as trustworthy as the local system, so the remote system causing something to run on the local system would not really be a security concern.
Thank you for these good questions! For VS Code remote, the VS Code server is in the same trust boundary as the VS Code client. That means you should only connect to VS Code servers that you trust: only connect to SSH machines that you trust, and only create dev containers from definitions that you trust (i.e. you should not use dev containers as a sandbox).

For Remote-Containers, we handle trust using VS Code's UI; for example, we create dev containers only from definitions that come from a trusted folder, we prompt and ask for trust before creating a dev container from a git repository URL, etc. For Remote-SSH, we include the following notice in the extension README: …
@alexdima Thanks, however I don't think that warning is clear enough to cover the …
I agree. That's why we have this (very related) item on our roadmap -- https://github.com/microsoft/vscode/wiki/Roadmap#security. Doing parts of this work is a prerequisite for supporting connections to untrusted servers.
@alexdima I'm not sure that covers it. For example, …

Also, the GPG situation I've discussed isn't about sandboxing. (In my previous comment, by "sandboxing VS Code" I meant running the entire graphical application in a VM.) The Remote extensions are doing what they were intended to do. If some desktop VM software "helpfully" mapped the local machine's entire home directory into a VM without telling the user, we wouldn't say that's a sandboxing failure. VS Code does something similar by injecting git credentials, as was discussed in #5500 from @markomitranic and in other issues for over two years. Maybe it's not worth using this GPG situation as an example any more without more information about why the equivalent git options remain undocumented.
@alexdima In #6391 @Tyriar wrote "Microsoft/our team encourages the use of containers which enhance security and ease of setup." You wrote, above, "you should not use dev containers as a sandbox". In #6391 (comment) @Tyriar suggests these statements don't conflict, but I'm confused. Could you please explain?
@jeremyn I said they enhance security, not provide "absolute security".
@Tyriar Well, yeah, very little provides absolute security. Even classic examples of security theater technically enhance security in some minor way. To be clear, I'm not trying to play word games or gotchas. The key issue is whether Microsoft recommends using containers to provide isolation or not. @alexdima says they don't; you say they do.

Recall the inspiration here is VS Code Remote-* injecting sensitive info, such as the GPG public key database, into a container by default and without telling the user. If desktop VM software (VMware, VirtualBox) did this it would be completely ridiculous: not just a bug but a catastrophic design flaw. This is because these products are intended to provide airtight seals between the VM and the host (and other VMs), and quietly copying sensitive data into a VM violates the very intention of these products. So the question is what is the intention, or assumption, behind the VS Code Remote-* extensions: …
Currently it seems there's a dangerous situation where maybe one team is writing code as if there is no local/remote trust boundary, and another team is recommending remote environments because they assume a local/remote trust boundary provides security. The fact that a local/remote trust boundary enhances security in (only) some ways is not so important, because malicious authors can trivially target their malware at the exposed areas. So this tension should be resolved somehow, either internally in the VS Code team, or by clarifying here if there is just some mixed messaging.
@jeremyn I very intentionally wrote enhance, and I specifically was talking about containers, the technology, with respect to security. For one, Codespaces wouldn't be possible, at least in its current form, without containers; plus of course they enhance security and ease of setup regardless of how you connect to them, since a malicious actor would need a more targeted attack to escape the container's sandbox. By "Microsoft encourages" I mean we are building the Codespaces product and have built and talked about the Remote - Containers extension; I believe it's recommended in VS Code if you have a Dockerfile, for example. They're obviously being built to be used by people. None of that contradicts what @alexdima said, but if there is any doubt you should go with what @alexdima said since, as I said before, this isn't really my area, and I have been trying to step back from this discussion because of that.
@Tyriar Not to dig at you specifically, but just for the record, as it were: with this git/GPG/SSH injection thing, we're not talking about some hypothetical 0-day container-escape kernel vulnerability here. We're talking about run-of-the-mill code doing something bad with a user's git/GPG/SSH material that has been unintentionally injected into a container. I'm being deliberately vague, but you can imagine what I mean.

Anyway, I'm getting skeptical that this conversation, meaning the entire issue, is going to go anywhere. Is there any appetite on the VS Code team to write documentation like "our extension system has these security weaknesses, our Remote-* extensions intentionally weaken your security in such and such ways, here's what you can do to mitigate that"? Probably not.
@jeremyn Sorry for being late to the discussion, but we only open a container on a folder after the user has confirmed trusting the folder. Only then do we set up Git/GPG/SSH forwarding. How is the dialog asking you if you trust the folder unclear about the security provisions?
@chrmarti Any environment that pulls in dependencies from a public source is effectively untrustable, yet must be trusted in VS Code if you want to actually do anything in it. See #6608 (comment), where I pointed out that malware could have been installed in a Vue devcontainer recommended by Microsoft. The only serious defense is to limit sensitive access in the environment.
@Tyriar @alexdima Apart from copying data to remotes/containers, I am extremely curious how a remote/container can execute code on the system running the GUI. Do you mean risks associated with vulnerabilities in the protocol between the client (GUI) and the server? Or is there a well-defined way the server can instruct the GUI to execute code, which could be exploited by a malicious actor on the server machine? Can running code on the client happen without an extension installed on it, with only extensions installed on the remote?

I agree that the note in the Remote-SSH README technically defines this, but it is just a note, and the intuition is that if you run the workspace remotely, it should be contained there. Moreover, a comment about Workspace Trust says it is related to code running where the workspace is:

"Being a developer is rewarding, but it's also a risky business. To contribute to a project, you inherently need to trust its authors because activities such as running npm install or make, building a Java or C# project, automated testing, or debugging, all mean that code from the project is executing on your computer." (source: https://code.visualstudio.com/blogs/2021/07/06/workspace-trust)

I had been assuming remotes are somewhat isolated; obviously I was incorrect, due to not reading closely. However, I think fully understanding it requires more than just the note. It is very, very easy to miss! And a precise explanation of the risks would be very welcome, because for some people leaking data to the remote might be OK, while remote code execution on the client can be unacceptable. I would greatly appreciate feedback on the remote code execution part. Thank you!
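For readers who want to see where the Workspace Trust boundary is configured, these are the relevant user settings. The values shown are, to my understanding, the defaults at the time of writing; check the Settings editor in your version of VS Code, since these may change:

```jsonc
{
  // Master switch for the Workspace Trust feature.
  "security.workspace.trust.enabled": true,
  // When to prompt about trust on startup: "always" | "once" | "never".
  "security.workspace.trust.startupPrompt": "once",
  // Handling of loose files from untrusted sources: "prompt" | "open" | "newWindow".
  "security.workspace.trust.untrustedFiles": "prompt"
}
```

Note, as the comment above points out, that these settings gate what happens when a folder is opened; they do not by themselves describe what a trusted remote or container can do on the client.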
Does the recently enabled Electron sandbox (Insiders channel) make any difference to the security (or insecurity) of the host in malicious remote contexts?
The VS Code team wrote a blog post yesterday describing the new sandbox functionality. I admit I only skimmed it, but the idea seems to be to move arbitrary Node.js activity out of the renderer and behind a local Electron sandbox boundary. They give an example where formerly you could write to arbitrary files, but now you have to make that request through a special …

It looks like a substantial improvement in security for all of VS Code, including remote development, which is great. However, it's unclear whether there are any gaps a mildly determined attacker could exploit; in other words, whether this adds ironclad protection or instead just makes it a lot harder for non-malicious code to accidentally cause problems.
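The general pattern behind this kind of sandbox is brokered access: sandboxed code never touches a resource directly, it sends a request to a privileged process that validates it against a policy first. Here is a minimal sketch of that idea in plain Node.js. This is illustrative only, not VS Code's or Electron's actual API; `brokerHandle`, the in-memory `files` map, and the `/workspace/` policy are all invented for the example:

```javascript
// Illustrative broker pattern (not VS Code's real implementation).
// The "privileged" side validates every request before acting on it;
// the "sandboxed" side can only send messages, never do direct I/O.

const ALLOWED_PREFIX = '/workspace/'; // hypothetical policy root
const files = new Map();              // stands in for the real filesystem

// Privileged side: the only code with access to the resource.
function brokerHandle(request) {
  if (request.op === 'write') {
    if (!request.path.startsWith(ALLOWED_PREFIX)) {
      return { ok: false, error: 'path outside allowed root' };
    }
    files.set(request.path, request.data);
    return { ok: true };
  }
  return { ok: false, error: 'unknown operation' };
}

// Sandboxed side: requests are granted or denied by policy, not trust.
const allowed = brokerHandle({ op: 'write', path: '/workspace/notes.txt', data: 'hi' });
const denied  = brokerHandle({ op: 'write', path: '/etc/passwd', data: 'oops' });
console.log(allowed.ok, denied.ok); // true false
```

The commenter's question maps onto this sketch directly: the security win depends entirely on how complete the broker's validation is, which is exactly the "gaps a mildly determined attacker could exploit" concern.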
FYI, there's a new "remote tunnel" functionality described in this blog post, where basically you can edit your code in some remote location without an SSH connection. It appears to open a "tunnel" from the remote location to https://vscode.dev, where you can access it. So this seems like a step up as far as sandboxing goes, though on the other hand your remote system is now extremely accessible to anyone with the global, guessable URL for the tunnel and your GitHub credentials or an active GitHub login.
@zzh1996 @alexdima I don't think we're going to get a real resolution on this ticket. VS Code development is too rapid, and for various reasons I think it's very unlikely Microsoft/the VS Code team will create a security guide and keep it updated. Perhaps if someone from the VS Code team would provide a few links to official channels where VS Code security topics are discussed, we could consider this issue good enough to close?
There are now 20 upvotes on a related ticket, microsoft/vscode#180233: …
There is currently no documentation of the security model of VSCode Remote Development.
If the remote server is fully controlled by an attacker, is it possible for them to run arbitrary code on my local machine? Is there any PoC for this?
If the answer is yes, does Restricted Mode solve this problem? In my understanding, Restricted Mode only stops attacks from the project folder, while the attacker could also manipulate the VS Code server itself.