Posts 4
Comments 70
Python in Excel – Available Now
  • That's true, but how often have you heard a finance team member ask for a CSV file so they can more easily process the data using Pandas or visualize it with Matplotlib? How many accountants or finance people (especially those who ask for everything in Excel) do you know who are comfortable writing even a single line of Python code? How many of the finance team's Excel-based tools will Python integrate well with? What feature(s) does Python within Excel provide that Excel (formulas, pivot tables, VBA, Power Query, Power Pivot, etc.) does not provide that someone on the finance team would need? What advanced charting/dashboarding functionality does Python in Excel provide that isn't better accomplished in Power BI (if not handled by standard Excel charts/graphs)?

    Don't get me wrong - Microsoft's implementation of Python in Excel has its merits; it will solve some problems that otherwise could not be solved in Excel and will make some people happy. However, this is not the solution most people were expecting, asking for, or will find useful.

  • Python in Excel – Available Now
  • I agree with everything you said, but (in Microsoft's eyes) this is a feature - not a bug.

    Without this cloud component, how could:

    • Microsoft make sure that the accounting team does not introduce a malicious/old Python library into the Excel file?
    • Microsoft protect its users from writing/running inefficient, buggy, or malicious Python code?
    • Microsoft provide a Python runtime to users who do not know how to install Python?
    • Microsoft charge to run code that you wrote in a free, open source software programming language on a device that you own?
  • Python in Excel – Available Now
  • Over a year later and I still do not understand what the use case for this is.

    A lot of the examples/documentation that Microsoft made for this seems to focus on data analysis and data visualization. Anyone in those fields would probably prefer to get the data out of Excel and into their tool/pipeline of choice instead of running their Python code in Excel. That also makes the big assumption that the data being used is fully contained within the Excel file and that the libraries used within the code are available in Excel (including the required library versions).

    For anyone looking to learn/use Excel better, I doubt the best use of their time is learning a new programming language and how Excel implements that programming language. They would likely be better off learning Excel's formulas, pivot tables, charts, etc. They could even learn Power Query to take things to another level.

    For anyone looking to learn Python, this is an absolutely terrible way to do so. For example, it abstracts away library maintenance, may present modified error messages, and makes the developer feedback loop more complicated.

    If you want to automate Excel, this feature realistically adds very little functionality that did not already exist. Other Python libraries like OpenPyxl and xlWings will still be required to automate Excel.
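    For example, here is a minimal sketch of that kind of outside-of-Excel automation with openpyxl (my own illustration; the sheet contents and filename are arbitrary):

```python
# Minimal openpyxl sketch: build and script an Excel file entirely outside
# of Excel itself - the kind of automation "Python in Excel" does not cover.
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
ws.title = "Report"

# Write a small table plus a native Excel formula.
ws.append(["Region", "Sales"])
for region, sales in [("North", 125), ("South", 98)]:
    ws.append([region, sales])
ws["B4"] = "=SUM(B2:B3)"  # Excel evaluates this formula when the file is opened

wb.save("report.xlsx")
```

    The same script can be dropped into a scheduled job or CI pipeline, which is exactly the kind of workflow the in-Excel runtime cannot replace.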

    I am sure there are edge cases where this iteration of Python in Excel is perfect. However, this feels more like a checkbox filler ("yeah, Excel supports Python now") than an actually useful feature. A fully featured and supported Python library that manipulates Excel/Excel files would have been a much more exciting and useful feature - even if it had to be executed outside of Excel, like OpenPyxl.

  • Plan Less, Do More: Introducing Appointment By Thunderbird
  • This is awesome! I wonder how it will compare to the competition when it's generally available.

    Does anyone have any information about the other offerings mentioned in the blog post?

    In the future, we intend for Appointment to be part of a wider suite of helpful products enhancing the core Thunderbird experience. Our ambition is to provide you with not only a first-rate email application but a hub of productivity tools to make your days more efficient and stress-free.

  • live location sharing?
  • This is definitely the wrong answer for this community, but may be an acceptable answer for this post. I have never used it nor would I ever recommend using it, but the conversations I have had with others who do use it make it seem like the service is far better than any alternative. Given the OP's requirements and willingness to both pay and sacrifice privacy, it seems like this may be appropriate for OP.

    I would still explore other options though. There are several competitors to Life360 and presumably there are some with better privacy policies (even if the service would not typically be recommended on this community). Maybe OP could use a service like https://tosdr.org or https://tldrlegal.com to better evaluate those options that would likely not get much attention on this community.

    Depending on the required features, the Live Location Sharing feature of chat apps like Element may be sufficient. It could also improve the users' privacy by switching them to a more private/secure messaging app in the process.

  • HACS 2.0 - The best way to share community-made projects just got better - Home Assistant
  • The improvements sound great.

    I did not look through the details, but it seems strange that one of the features is using Cloudflare R2 to improve download speeds and reduce API calls to GitHub, while at the same time a new requirement of providing a personal GitHub API token is being added.

    Hopefully one day the GitHub requirement will be removed. It would be nice if projects/code stored on GitLab, Codeberg, or other Git services like Gitea or Forgejo could be used without having to mirror/fork the project onto GitHub.

  • Identity provider privacy
  • In terms of privacy, you are giving your identity provider insight into each of the third party services that you use. It may seem that there isn't much of a difference between using Google's SSO and using your Gmail address to register your third party account. However, one big distinction is that Google would be able to see how often and when you use each of your third party services.

    Also, it may be impossible to restrict the sharing of certain information from your identity provider with the third party service. For example, maybe you don't want to share a picture of yourself with a service, but that service uses user profile pictures or avatars. That service may ask (and require) that you give it access to your Google account's profile picture in order to authenticate using Google's SSO. You may be able to overwrite that picture, but you also may not be able to revoke the service's ability to retrieve it. If you used a "regular" local account, that Google profile picture would never be shared with the third party service if you did not upload it directly. The same is true for other information like email, first/last/full name, birthday, etc.
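    To illustrate (with a hypothetical token, not any real provider's): the ID token the identity provider hands to the third party service is just a signed JSON payload of claims, and the scopes the service requests decide which claims (email, name, picture, etc.) appear in it:

```python
# Decode the payload of a hypothetical OIDC ID token (the middle segment of
# a JWT). The claims below are placeholders; which ones a real provider
# includes depends on the scopes (e.g. "profile", "email") the third party
# service requested during authentication.
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    # JWT segments use base64url without padding; restore it before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a fake token payload for demonstration only.
claims = {
    "sub": "1234567890",
    "email": "user@example.com",            # shared if the "email" scope was granted
    "name": "Jane Doe",                     # shared if the "profile" scope was granted
    "picture": "https://example.com/a.png", # also part of the "profile" scope
}
fake_payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
fake_token = f"header.{fake_payload}.signature"

print(decode_jwt_payload(fake_token))
```

    Once the service has been granted a scope, every claim inside it travels with each login - which is why revoking a single piece of information (like the picture) after the fact is often not possible.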

    There are other security and operational concerns with using SSO options. With the variety of password managers available, introduction of passkeys, and increased adoption of multi-factor authentication, many of the security benefits associated with SSO aren't as prevalent as they were 10 years ago. The biggest benefit is likely the convenience that SSO still brings compared to other authentication methods.

    Ultimately it's up to you to determine whether these concerns are worth the benefits of using SSO (or of the third party service at all, if it requires SSO). I have a feeling the common advice will be to avoid SSO unless it's an identity provider that you trust (or even better - one that you host yourself) - especially if you're using unique emails/usernames along with strong and unique passwords with multi-factor authentication and/or passkeys.

  • Routing all traffic through Mullvad?
  • There are a few performance issues that you may experience. For example, if you're into online gaming then your latency will likely increase. Your internet connection bandwidth could also be limited by either Mullvad's servers, your router, or any of the additional hops necessary due to the VPN. There's also the situation where you have no internet connection at all due to an issue with the VPN connection.

    There are also some user experience issues that users on the network may experience. For example, any location based services that rely on your IP address will either not work at all or require manual updates by the user. The same is true for other settings like locale, but those are hopefully better handled via browser/system settings. What's more likely is content restrictions based on the geographic location of your IP address. Additionally, some accounts/activity could be flagged as suspicious, suspended, or blocked/deleted if you change servers too frequently.

    I'm sure you are either aware of or thought through most of that, but you may want to make sure everyone on the network is fine with that too.

    In terms of privacy and security, it really comes down to your threat model. For example, if you're logged into Facebook, Google, etc. 24/7, use Chrome, Windows, etc., and never change the outbound Mullvad server, you're not doing too much more than removing your ISP's ability to log your activity (and maybe that's all you want/need).

  • Findroid v0.15.0 is now available for update
  • I think there may be an issue where F-Droid is not properly recognizing the 64-bit version of Findroid. Maybe Droid-ify and/or the version of Android you are using won't allow 32-bit apps to be installed.

  • Findroid v0.15.0 is now available for update
  • Just to clarify - this is just an update that (I believe) is only available on IzzyOnDroid's F-Droid Repo, which previously had prior Findroid versions available. This new v0.15.0 is not available on the main F-Droid Repo.

    Is anyone only able to download the 32-bit version of this app via F-Droid? It looks like a 64-bit version has been made available starting with v0.3.0 and is also available on this new version.

  • Car Privacy is Shit
  • I'm really not sure why you got downvoted so hard, and it's a shame your comment was deleted. Your comment was relevant, accurate, and focused on an issue that others in here aren't talking about (and apparently don't want to). You were also the only person in this thread who provided any sources.

    I'm not sure what argument can be made against what you said. Just because a piece of information "is public" doesn't mean everyone wants that public information collected and shared with little (if any) control/input by you. If that were the case, doxxing wouldn't be an issue.

  • Car Privacy is Shit
  • I did not watch the mentioned video so I am not sure if what I am about to mention is discussed there or not. Also, sorry for the really long reply!

    I am not aware of any truly privacy respecting, modern cars available. However, assuming that you obtain one, or that you can do things like physically disconnect/remove all wireless connectivity from the car to make it as private/secure as possible, there is still little you can do to be truly anonymous.

    Your car likely has a VIN and license plate as well as a vehicle registration. Assuming you legally obtained the vehicle and did not take any preventative measures prior to purchasing the car, those pieces of information will be tied back to you and your home address (or at least someone closely connected to you). You would need to initially obtain the vehicle with a company/LLC/partnership/etc. as the owner/renter/lessee of the vehicle and an address not associated with you. Additionally, you would need to find some means of avoiding or limiting the additional information connected to you that is likely required to obtain the vehicle, like car insurance and your driver's license.

    Additionally, any work that certain mechanics perform may be shared (either directly or indirectly) with data brokers - even just routine maintenance like an oil change or alignment. Hopefully you didn't use your credit card, loyalty rewards program, etc. when you had any work done!

    There is also CCTV, security cameras, and other video recorders that are nearly impossible to avoid. Given enough time/resources and maybe a little bit of information, your car could be tracked from its origin to destination locations. This location history can be used to identify you as the owner (or at least driver/passenger) of the car. Unless your car never leaves your garage, you can almost guarantee that your car is on some Ring camera, street camera, etc.

    Furthermore, anything special or different about your car (custom decal, unusual window tinting, funny bumper sticker, uncommon color for the car, uncommon trim/package for the car, dented bumper, fancy rims, replaced tires, specific location of the toll reader on the windshield, something hanging from your rearview mirror, etc.) helps identify your car. The make/model and year of your car can also be used to identify it if it's not a common car in the area. These identifiers can be used to help track your car via the video feeds mentioned above.

    Then there are license plate readers which are only slightly easier to avoid than the video recordings. Permanent, stationary license plate readers can be found on various public roads and parking lots. There are also people who drive around with license plate readers as part of their job for insurance/repossession purposes. You may be able to use some sort of cover over your license plate(s) to hinder the ability of license plate readers to capture your plate number, but that could be used to help identify your car in video feeds/recordings.

  • How is instagram spying on me?
  • It's really hard to tell from a technical perspective, especially without having closely monitored all of your digital activity (and that of those you have been in close contact with) in the days/weeks leading up to receiving the ads. Some things that Meta could have done (in varying degrees of realism) include:

    • read anything you downloaded from your Matrix client, like file attachments
    • read your notifications if they contain any contents of the conversation
    • read your clipboard if you copied/pasted anything into/out of a Matrix client
    • actively participated in the room and associated your Matrix ID with your Meta account(s)
    • scraped the contents of the room if it is public and unencrypted
    • learned your Matrix ID because others in the room saved it in your contact entry within their contacts
    • recorded your screen outside of Meta's apps
    • received information from a Meta library used in another app/service on your device
    • read an attachment that you downloaded elsewhere and then shared on Matrix
    • read screenshots you or others took of the conversation
    • used a back door in the Matrix server or client software
    • received non-encrypted information from the administrators of your Matrix home server (or the administrators of any other home server in the room) who share it to offset hosting costs
    • run the home server of a user in the room
    • tracked a link shared in the Matrix room that you or someone associated with you clicked, where the link contained a tracker or led to a site with one

    It's really hard to comprehensively and conclusively avoid all "spying" that Meta/Instagram could do to you. The best thing that you could do is something that many people aren't capable of or willing to do - not install any Meta software, not use any Meta services, block all Meta IP addresses and/or domain names, and advocate that those around you do the same.
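    For example, part of the domain blocking could be done at the hosts-file level (an illustrative fragment only - Meta uses far more domains than these few, and a network-wide DNS filter like Pi-hole would be more thorough):

```
# /etc/hosts fragment (illustrative): resolve a few well-known Meta domains
# to 0.0.0.0 so apps on this machine cannot reach them. Real blocklists
# contain hundreds of entries and also need to be kept up to date.
0.0.0.0 facebook.com
0.0.0.0 www.facebook.com
0.0.0.0 instagram.com
0.0.0.0 www.instagram.com
0.0.0.0 graph.facebook.com
```

    A hosts file only protects the one device it lives on, which is why blocking at the router or DNS level covers more of the "those around you" part.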

    Realistically, the best advice that you're going to get has already been said. Use the web browser instead of the app as much as possible, ideally in a different browser and/or user profile. If you must have the app installed, keep it in a separate profile and kill the app and/or profile whenever it is not in use. Review all of your security and privacy settings in all Meta apps. Review any apps/services you allowed Meta to connect to/from (and the security/privacy settings of those apps). Reduce the amount of information that you enter/share on Meta platforms. Review the other users that you are connected with on Meta's platforms.

  • NumPy 2.0.0 released
  • I did not know about autolinks - thanks for the link!

    It is interesting how different parsers handle this exact situation. I'm usually cautious about it because I'm typically not sure how it will be handled if I'm not explicit with the URL and any additional text.

  • NumPy 2.0.0 released
  • I'm curious about this. The source text of your comment appears to be just the URL with no markdown. For your claim about a markdown parsing bug to be true, shouldn't the URL have been written in markdown with []() notation (or with a space between the URL and the period), since a period is a valid URL character? For example, instead of typing https://google.github.io/styleguide/cppguide.html., should [https://google.github.io/styleguide/cppguide.html.](https://google.github.io/styleguide/cppguide.html) have been typed?
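    A rough sketch of the trailing-punctuation heuristic many autolink parsers apply (an illustration of the idea, not any specific parser's implementation):

```python
# Sketch of the heuristic many linkifiers use for bare URLs: match the URL
# greedily, then strip characters like '.' or ',' from the end, since those
# usually belong to the surrounding sentence rather than the link - even
# though they are technically valid URL characters.
import re

URL_RE = re.compile(r"https?://\S+")

def linkify_target(text: str) -> str:
    """Return the URL a linkifier with this heuristic would produce."""
    match = URL_RE.search(text)
    url = match.group(0)
    # Trim common trailing sentence punctuation.
    return url.rstrip(".,;:!?")

print(linkify_target("See https://google.github.io/styleguide/cppguide.html."))
```

    A parser without this heuristic would keep the trailing period and link to .../cppguide.html. instead, which is exactly the ambiguity that makes explicit []() notation safer.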

  • Automated CI/CD Data Snapshots
  • Yes, I am using PersistentVolumes. I have played around with different tools that have backup/snapshot abilities, but I haven't seen a way to integrate that functionality with a CD tool. I'm sure that if I spent enough time working through things, I could put together something that allows the CD tool to take a snapshot. However, I think that having it handle rollbacks would be a bit too much for me to handle without assistance.

  • Automated CI/CD Data Snapshots
  • Thanks for the reply! I am currently looking to do this for a Kubernetes cluster running various services to more reliably (and frequently) perform upgrades with automated rollbacks when necessary. At some point in the future, it may include services I am developing, but at the moment that is not the intended use case.

    I am not currently familiar enough with the CI/CD pipeline (currently Renovatebot and ArgoCD) to reliably accomplish automated rollbacks, but I believe I can get everything working with the exception of rolling back a data backup (especially for upgrades that contain backwards incompatible database changes). In terms of storage, I am open to using various selfhosted services/platforms even if it means drastically changing the setup (eg - moving from TrueNAS to Longhorn, moving from Ceph to Proxmox, etc.) if it means I can accomplish this without a noticeable performance degradation to any of the services.

    I understand that it can be challenging (or maybe impossible) to reliably generate backups while the services are running. I also understand that the best way to do this for databases would be to stop the service and perform a database dump. However, I'm not too concerned with losing <10 seconds of data (or however long the backup jobs take) if the backups can be performed in a way that does not result in corrupted data. Realistically, the most common use cases for the rollbacks would be invalid Kubernetes resources/application configuration as a result of the upgrade or the removal/change of a feature that I depend on.

  • Automated CI/CD Data Snapshots

    cross-posted from: https://lemmy.ml/post/16693054


    Is there a feature in a CI/CD pipeline that creates a snapshot or backup of a service's data prior to running a deployment? The steps of an ideal workflow that I am searching for are similar to:

    1. CI tool identifies new version of service and creates a pull request
    2. Manually merge pull request
    3. CD tool identifies changes to Git repo
      1. CD tool creates data snapshot and/or data backup
      2. CD tool deploys update
    4. Issue with deployment identified that requires rollback
      1. Git repo reverted to prior commit and/or Git repo manually modified to prior version of service
      2. CD tool identifies the rolled back version
        1. (OPTIONAL) CD tool creates data snapshot and/or data backup
        2. CD tool reverts to snapshot taken prior to upgrade
        3. CD tool deploys service to prior version per the Git repo
    5. (OPTIONAL) CD tool prunes data snapshot and/or data backup based on provided parameters (eg - delete snapshots after _ days, only keep 3 most recently deployed snapshots, only keep snapshots for major version releases, only keep one snapshot for each latest major, minor, and patch version, etc.)
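    For reference, one way to approximate the snapshot step (3.1) with ArgoCD might be a PreSync resource hook that creates a CSI VolumeSnapshot before each sync. This is a hypothetical sketch only - it assumes a CSI driver with snapshot support, and the VolumeSnapshotClass and PVC names are placeholders:

```yaml
# Hypothetical ArgoCD PreSync hook: snapshot the service's PVC before each
# deployment. "csi-snapclass" and "my-service-data" are placeholder names.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  generateName: pre-deploy-snap-
  annotations:
    argocd.argoproj.io/hook: PreSync
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: my-service-data
```

    This only covers taking the snapshot; restoring it on rollback (step 4.2.2) would still need a separate hook or manual step that creates a new PVC from the snapshot, which is the part I have not found an off-the-shelf solution for.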
    Hosting a public wishlist
  • There are several proprietary options (many/most of which you cannot host). Looking for Amazon Wishlist alternatives should help in putting together a list of potential options. Some additional projects which are open source and selfhostable that you could also start with include:

  • homelab @lemmy.ml rhymepurple @lemmy.ml

    Automated Container Image Updates

    I'm trying to find a video that demonstrated automated container image updates for Kubernetes, similar to Watchtower for Docker. I believe the video was by @geerlingguy@mastodon.social but I can't seem to find it. The closest functionality that I can find to what I recall from the video is k8s-digester. Some key features that were discussed include:

    • Automatically update tagged version number (eg - Image:v1.1.0 -> Image:v1.2.0)
    • Automatically update image based on tagged image's digest for tags like "latest" or "stable"
    • Track container updates through modified configuration files
      • Ability to manage deploying updates through Git workflows to prevent unwanted updates
    • Minimal (if any) downtime
    • This may not have been in the video, but I believe it also discussed managing backups and rollback functionality as part of the upgrade process

    While this tool may be used in a CI/CD pipeline, it's not limited exclusively to Git repositories, as it could be used to monitor container registries from various people or organizations. The tool/process may have also incorporated Ansible.

    If you don't know which video I'm referring to, do you have any suggestions on how to achieve this functionality?

    EDIT: For anyone stumbling on this thread, the video was Meet Renovate - Your Update Automation Bot for Kubernetes and More! by @technotim@mastodon.social, which discusses the Kubernetes tool Renovate.
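    For anyone who lands here, a minimal Renovate configuration along those lines might look like the following. This is a sketch only - the manifest path is a placeholder, and preset/option names may differ between Renovate versions, so check the current Renovate docs before using it:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "kubernetes": {
    "fileMatch": ["^manifests/.*\\.ya?ml$"]
  }
}
```

    With this in the repo, Renovate opens pull requests for new image tags it finds in the matched Kubernetes manifests, which covers the Git-workflow-gated updates described above.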


    Multiple Librewolf Instances

    I've been looking for something "official" from the Librewolf team regarding running Librewolf in Docker, but I haven't found much. There are a few initiatives that seem to support Librewolf Docker containers (eg GitHub, Docker Hub), but they don't seem to be referenced much nor heavily used. However, maybe the reason I don't see this much is that there are better ways to achieve what I'm looking for:

    • Better separation from daily OS environment and regular browsing environment
    • Ability to run multiple instances of a privacy-friendly browser and isolate each instance for particular use cases
    • Configure each instance to be run over different VPNs (or no VPN at all)

    Is there a way to best achieve this?
