  • The issue isn’t Docker vs Podman vs k8s vs others. They all use OCI images to create your container/pod/etc., so this new limit impacts all containerization solutions, not just Docker. EDIT: removed LXC as it does not support OCI

    Instead, the issue is Docker Hub vs Quay vs GHCR vs others. It’s about where the OCI images are stored and pulled from. If the project maintainer hosts the OCI images on Docker Hub, then you will be impacted by this regardless of how you use the OCI images.

    Some options include (a config sketch follows the list):

    • For projects that do not store images on Docker Hub, continue using the images as normal
    • Become a paid Docker member to avoid this limit
    • When a project publishes images to multiple container registries, pull from one that is not Docker Hub
    • For projects that have community or 3rd-party maintained images on registries other than Docker Hub, use those images
    • For projects that are open source and/or have instructions on building OCI images, build the images locally and bypass the need for a container registry
    • For projects you control, store your images on other image registries instead of (or in addition to) Docker Hub
    • Use an image tag that is updated less frequently
    • Rotate the order in which images are pulled from Docker Hub so that each image eventually has an opportunity to update
    • Pull images from Docker Hub less frequently
    • For images that are used by multiple users/machines under your supervision, create an image cache or local image registry for your users/machines to reduce the number of pulls from Docker Hub
    • Encourage project maintainers to store images on image registries other than Docker Hub (or at least provide additional options beyond Docker Hub)
    • Do not use OCI images at all, and use VM or bare-metal installations instead
    • Use alternative software solutions that store images on registries other than Docker Hub
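
    As a rough illustration of the registry-choice and caching options above, here is a minimal Compose sketch. The specific images, tags, and host port are only examples; the cache relies on the registry:2 image’s documented pull-through proxy mode:

        services:
          # The registry is part of the image reference; a bare name like
          # "nginx" implicitly means docker.io/library/nginx (Docker Hub).
          prometheus:
            image: quay.io/prometheus/prometheus:v2.53.0          # Quay
          homeassistant:
            image: ghcr.io/home-assistant/home-assistant:stable   # GHCR
          # A local pull-through cache that proxies Docker Hub, so machines
          # behind it share a single upstream pull per image.
          registry-cache:
            image: registry:2
            environment:
              REGISTRY_PROXY_REMOTEURL: "https://registry-1.docker.io"
            ports:
              - "5000:5000"

    Clients would then point at the cache, e.g. by adding "registry-mirrors": ["http://cache-host:5000"] (a hypothetical hostname) to each machine’s /etc/docker/daemon.json.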

  • I’m already doing that, but just for one VIP. I think I just need to get the additional VIPs working.

    I know that I will need to update my local network’s DNS so that, for example, service#1 = git.ssh.local.domain with git.ssh.local.domain = 192.168.50.10, and service#2 = sftp.local.domain with sftp.local.domain = 192.168.50.20. I would set up 192.168.50.10 as the load balancer IP address for Forgejo’s SSH entrypoint and 192.168.50.20 as the load balancer IP address for the SFTP entrypoint. However, how would I handle requests/traffic received externally? The router/firewall would receive everything, and it can only port forward port 22 to a single IP address, which would prevent one (or more) of the services from being used externally, correct?
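
    In dnsmasq syntax (assuming dnsmasq or something similar is the local resolver), those entries would look roughly like this:

        # each service hostname resolves to its own VIP on the LAN
        address=/git.ssh.local.domain/192.168.50.10
        address=/sftp.local.domain/192.168.50.20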

  • I am unsure if I understood everything correctly, but I believe I am already doing everything that you mentioned. I followed Kube-VIP’s ARP daemonset documentation, and the leader election works. I am not using Kube-VIP for load balancing, though. Instead, I am using Traefik, which uses the same IP address that was assigned to the control plane during both k3s’s and Kube-VIP’s setup. However, I am unable to get any additional VIP addresses to route properly to Traefik.
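
    For reference, this is roughly the kind of Service I would expect to pin an extra VIP to Traefik’s SSH entrypoint, assuming Kube-VIP’s service election is enabled (svc_enable: true in its daemonset). The name and selector label are taken from k3s’s default Traefik install, so treat them as assumptions:

        apiVersion: v1
        kind: Service
        metadata:
          name: traefik-git-ssh
          namespace: kube-system                # where k3s deploys Traefik
          annotations:
            kube-vip.io/loadbalancerIPs: "192.168.50.10"
        spec:
          type: LoadBalancer
          selector:
            app.kubernetes.io/name: traefik     # k3s Traefik’s default label
          ports:
            - name: ssh
              port: 22
              targetPort: 22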

    Even if I did get the additional VIP addresses working, I think I still have one last issue to overcome. I can control the local network’s DNS so that service#1 is assigned VIP#1 and service#2 is assigned VIP#2. However, how would this be handled for traffic received externally? If the external/public DNS has service#1 and service#2 assigned to the network’s public IP address, both services’ traffic would be received by the router/firewall on port 22. The router/firewall could forward traffic on port 22 to (presumably) a single IP address, which would only allow service#1 or service#2 (but not both) to receive traffic publicly, correct?