RTNETLINK answers: File exists
commented Jun 13, 2017
ERX, firmware 1.9.1.1. I tried executing the config commands, and also editing config.boot followed by load and commit; both produced the error. The config looks like: load this, commit. I'm sure wg0 is not configured, and to make certain it was gone I deleted it with ip link del and committed again; the error came back. I checked /opt/vyatta/share/vyatta-cfg/templates/interfaces/wireguard/node.def and changed it to check whether the interface exists before adding it. Then the error moved to /opt/vyatta/share/vyatta-cfg/templates/interfaces/wireguard/node.tag/address/node.def. After adding another check before the ip route call in the wireguard node.def, the commit passed. I also tried to change the IP address, but since address is defined as multi, the 'change' became an 'add' and there were two address entries under wg0. I then deleted both and added a third IP address.
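The "check before add" guard described above has roughly the following shape (a sketch of a fragment inside the router's node.def template, not the actual shipped file; the interface name is hard-coded here for illustration, and these commands require root on the device):

```shell
# Hypothetical guard: skip creation when the interface already exists,
# so a re-commit doesn't fail with "RTNETLINK answers: File exists".
if ! ip link show wg0 >/dev/null 2>&1; then
    ip link add dev wg0 type wireguard
fi
```

The same pattern (test with `ip link show` / `ip route show`, then act only on a miss) is what made the later ip route step commit cleanly as well.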
commented Jun 14, 2017
I hit this one when setting it up. Right now I can live without a default route on the wg interface, but I can think of a lot of use cases where I would need to define one.
commented Jun 14, 2017 • edited
Same here. I explicitly needed to whitelist all IPs in the tunnel, as the tunnel is used to carry arbitrary IP traffic. I ended up editing the file /opt/vyatta/share/vyatta-cfg/templates/interfaces/wireguard/node.tag/address/node.def and removing the section. I don't really understand the goal here: if I wanted to route traffic through the tunnel, I would either give the interface an IP or add a static route. From a semantics viewpoint I certainly would not expect the allowed-ips setting to create routes.
commented Jun 14, 2017
@julianschuh I don't understand neither. allowed-ip is an entry for ACL, not for routing, so I just muted it to commit than fixing it. |
commented Jun 14, 2017
@sskaje You are incorrect. For incoming encrypted packets, allowed-ips is an ACL. However, for unencrypted outgoing packets, once a packet is routed to the WireGuard interface, allowed-ips determines which one of the WireGuard peers it should be encrypted for. So in this latter sense, it is also a routing table within the interface. It might be useful to have an option to disable adding routes based on allowed-ips. I doubt, though, that any of the things mentioned above would require that. For example, why are you trying to have two default routes in the first place?
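To make the dual role concrete, here is a sketch of a peer section as it would appear in a plain wg configuration file (the key and the networks are placeholders, not values from this thread):

```
[Peer]
# Incoming: decrypted packets from this peer are dropped unless their
# source address falls inside AllowedIPs (the ACL role).
# Outgoing: packets routed to wg0 with a destination inside AllowedIPs
# are encrypted for this peer (the per-interface routing role).
PublicKey = <peer-public-key>
AllowedIPs = 10.0.0.2/32, 192.168.50.0/24
```

Setting AllowedIPs = 0.0.0.0/0 on a peer is what turns this into an implicit default route, which is exactly where it collides with an existing default route in the kernel table.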
commented Jun 14, 2017
I think that a wg interface's allowed-ips and the kernel's routing table(s) are different things that only match in the most simple static setups.
commented Jun 14, 2017
Okay, we're on the same page then. This indeed is a good reason why you might not want all the allowed-ips turned into routes. I'll prepare a patch for this.
commented Jun 14, 2017
Alright, fixed. Download this latest release (I posted a …)
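If I recall the packaging correctly, the resulting knob looks something like the following on the EdgeRouter CLI (the option name and default are assumptions here; check the release notes of the build you install):

```
configure
set interfaces wireguard wg0 route-allowed-ips false
commit
save
```

With route creation disabled, reachability through the tunnel is then governed only by the static routes (or interface addresses) you configure explicitly, while allowed-ips keeps its ACL and peer-selection roles.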
commented Jun 14, 2017
Just did a quick upgrade, I can confirm it works. I just wonder where this route comes from:
commented Jun 14, 2017
That route probably comes from the policy-based routing you have elsewhere. |
commented Jun 14, 2017 • edited
Nope, no such thing in this ER-X's configuration, and I did a fresh reboot. I can try to reproduce it on a lab ER-Lite later. For reference, my corresponding routing config:
commented Jun 14, 2017
Edited config.boot and changed the peer pubkey: same error with the latest release.
commented Jun 14, 2017
(deleted a comment that didn't make any sense)
commented Jun 14, 2017
@markuslindenberg
Ok. So I don't have all the answers yet, but here are some findings so far.
I've tried three different cases, and all the configs and logs plus iptables routes are pasted below.
Case 1:
Standard setup, original image, using the LOCAL_NETWORK variable to try to make Transmission accessible on my local LAN.
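For context, the relevant part of the compose file for this case looked roughly like the following. The variable names follow the haugene/transmission-openvpn README, but the subnet, provider value, and credentials are placeholders, not my actual config:

```yaml
services:
  transmission-openvpn:
    image: haugene/transmission-openvpn
    cap_add:
      - NET_ADMIN
    environment:
      - OPENVPN_PROVIDER=PUREVPN
      - OPENVPN_USERNAME=<user>
      - OPENVPN_PASSWORD=<password>
      - LOCAL_NETWORK=192.168.1.0/24   # the line dropped in Case 2
    ports:
      - "9091:9091"
```

Case 2 below is this same file with the LOCAL_NETWORK line removed.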
Case 2:
Ditching the LOCAL_NETWORK variable.
Case 3:
Same setup as case 2, but I removed the three route commands in the ovpn file.
For reference, the three consecutive lines starting with this one:
Result:
In the first case we see two RTNETLINK errors: near the beginning "RTNETLINK answers: Invalid argument" and then at the end "RTNETLINK answers: File exists". Transmission is not available on host:9091; the request just hangs. This is the typical behaviour when the local network route is not added, as also described in the README. Since I had also fired up the proxy container, I tested connecting through that instead, and it works. So Transmission is running, and using http://ipmagnet.services.cbcdn.com/ I could verify that the traffic was routed through the VPN.
Anyway, I figured that the first RTNETLINK error could come from the LOCAL_NETWORK route, so I removed that from the config.
In the second case, without the local network settings, as expected the first error is gone.
In the third case, where I also removed the route options from the .ovpn file, the second error is also gone.
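The route options I removed were plain OpenVPN route directives of this general form (the actual networks in PureVPN's file are not reproduced here; these are placeholders):

```
route 10.0.0.0 255.0.0.0
route 172.16.0.0 255.240.0.0
route 192.168.0.0 255.255.0.0
```

As a side note, OpenVPN also has a route-nopull option that makes the client ignore routes pushed by the server, but local route lines like these in the config file still have to be removed by hand.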
In all three cases Transmission is running and the IP check says the traffic is routed through the VPN. The route handling probably needs more investigation to understand what's needed and why PureVPN differs from other providers when it comes to the LOCAL_NETWORK variable and the route options in their configs.
Later I'll try to compare with PIA, which is what I'm using, but this is as far as I got today. Maybe someone can do some more digging as well?
Config for Case 1:
docker-compose.yml:
Logs:
Output from ip r:
Config for Case 2:
docker-compose.yml:
Same as for Case 1, but without the LOCAL_NETWORK line.
Logs:
Output from ip r:
Config for Case 3:
docker-compose.yml:
Same as for Case 2. Note that the image is changed as described.
Logs:
Output from ip r: