This one surprised me, too.
I had a nasty habit of waiting until the evening to do my papers in college, because that was when it was acceptable to have some wine or whiskey while I wrote. But it was amazing just how much easier it was to stay on task after having a drink, and during finals - or after college when I was on deadline - I would alternate between liters of coffee in the morning and several drinks in the evening.
Now that I'm medicated, both coffee and alcohol are just occasional indulgences... well, alcohol is, at least. But I didn't expect the medication to help curb my impulsive consumption habits like it has - it's been a game-changer.
Would be, but unfortunately all I have are fluorescent troffers down there. A single extension and splitter cable might still be acceptable, though. I also thought about getting some USB battery banks - the cameras run off a 5V power adapter, and I think a 15000mAh battery might last a couple of days, or maybe just one (I'm not sure how many watts they draw running the custom firmware).
I was hoping for a cleaner solution but it might be one of those "pick two" situations.
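For what it's worth, the "a couple of days, or maybe just one" guess can be sanity-checked with napkin math. Every number below is an assumption, not a measurement for these cameras: nominal 3.7 V power-bank cells, a ~2.5 W draw (hacked V3s are often reported in the 2-3 W range), and ~85% boost-converter efficiency.

```python
def runtime_hours(capacity_mah, cell_voltage=3.7, draw_watts=2.5, efficiency=0.85):
    """Rough runtime estimate for a camera running off a USB power bank.

    Power banks are rated in mAh at the internal cell voltage (~3.7 V),
    not at the 5 V output, so convert to watt-hours first.
    """
    energy_wh = capacity_mah / 1000 * cell_voltage  # cell energy in Wh
    return energy_wh * efficiency / draw_watts

# A 15000 mAh bank under these assumptions: a bit under a day.
print(round(runtime_hours(15000), 1))
```

So under those assumptions a 15000 mAh bank gives well under a full day, not multiple days - measuring the actual draw with a USB power meter would pin it down.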
Battery-powered wifi cameras with RTSP?
Over the weekend I set up some outdated Wyze v3 cameras with hacked firmware to enable RTSP, and was able to load the stream into Frigate to do some mouse-infestation detection. This worked great, and it was with hardware I already had lying around, but now I'm in need of some more coverage and I don't want extension cords hanging from my basement ceiling everywhere.
I thought there might be another ~$50 wifi battery camera out there that could be hacked or had native RTSP support, but my search is coming up short... it seems like people either settle for cheap cloud-polling ones or splurge on some real quality mid-range ones. Anyone know of any cheap options?
For those curious, here's the git repo I found for the Wyze cams. It's as easy as loading a micro-SD card with the firmware, giving it an SSH key, and turning it back on. Then you can SSH into it over the network and enable things like RTSP and a bunch of other features I don't know what to do with. It has proven to be handy, but it doesn't support the outdoor battery-powered models.
Well, the word you used was *should*.
Nor should they.
Errr, what now?
The stuff I’ve seen AI produce has sometimes been more wrong than anything a human could produce. And even if a human would produce it and post it on a forum, anyone with half a brain could respond with a correction.
Seems like the problem is that you're trying to use it for something it isn't good or consistent at. It's not a dictionary or encyclopedia, it's a language model that happens to have some information embedded. It's not built or designed to retrieve information from a knowledge bank, it's just there to deconstruct and reconstruct language.
When someone on a forum says they asked GPT and paste its response, I will at the very least point out the general unreliability of LLMs, if not criticise the response itself (very easy if I’m somewhat knowledgeable about the field in question)
Same deal. Absolutely chastise them for using it in that way, because it's not what it's good for. But it's a bit of a frequency bias to assume most people are using it that way, because those people are the ones using it in the context of social media. Those who use it for more routine tasks aren't taking responses straight from the model and posting them on Lemmy; they're using it for mundane things that aren't being shared.
Anyway my point was merely that people do regularly misuse LLMs, and it’s not at all difficult to make them produce crap. The stuff about who should be blamed for the whole situation is probably not something we disagree about too much.
People misuse it because they think they can ask it questions as if it's a person with real knowledge, or they are using it precisely for its convincing-bullshit abilities. That's why I said it's like laughing at a child for giving a wrong answer, or convincing them of a falsehood merely through passive suggestion - the problem isn't that the kid is dumb, it's that you (both yourself and the person using it) are going in with the expectation that they can answer that question or distinguish fact from fiction at all.
I imagine both Libre and Free are open source and easily modifiable? I haven't looked into it, but if it's anything like Rhino there should be a standard way of writing custom plugins that could close the gap on some of those - at least the object naming would be easy.
I'll look into them though, thanks! BIM software is such a pain in the ass to work with, and some of the most expensive design software I know of. I think open-source projects would be amazing for BIM if they took off like FreeCAD did.
I work as an architectural designer but I've never really been allowed to use anything other than Revit for BIM workflows. Our consultants basically only use Revit or Autodesk products, so our hands are kind of tied for projects where we need to collaborate.
My boss uses Vectorworks for our small projects that don't need BIM; I might suggest we switch to Libre or FreeCAD so that we all have access without needing another VW license. Do you enjoy using LibreCAD?
Tips were first used as a way for rail lines to avoid having to pay black coach attendants a wage.
It isn't surprising that service workers don't want to abolish tips, since that's primarily how they get paid now - but that doesn't mean we shouldn't abolish them. Owners should have to pay their workers a living wage. Making pay the consumer's responsibility frees the business owner from having to pay their workers for their labor.
Tip wages are exploitative, plain and simple.
Person A: *expresses a liberal perspective* "Watch as this person calls me a lib"
Person B: "... yes, that is a common liberal perspective"
Person A: "Called it"
The more we electrify our cars, the less feasible this is.
Decoding and sending messages to mechanical systems over the CAN bus is one thing (still difficult, but possible); taking control of system software is another. In the US, consumers are supposed to have the right to repair their personal vehicles, but a lot of that law was established back when you could work on a vehicle without touching digitally protected, copyrighted software. We might have a right to repair, but it's starting to clash with manufacturers' copyrights over their IP and software controls.
And that's not even getting into their eagerness to utilize subscription models - would a court side with a consumer who decided to circumvent DRM controls over subscription-locked features in a car they own outright? It's unclear to me that right-to-repair or consumer-protection laws have been written in a way that accommodates those conflicts, especially when cars are subject to far higher safety regulations than computers - a manufacturer could argue that it needs to prevent consumers from tampering with its software systems for their own safety.
If you still own a 'dumb' car without one of these systems, it's really not a bad idea to hold onto it for as long as possible. You can always upgrade it if you want to - some people have even swapped ICE drivetrains for electric ones. But once you own one of these cars with software-controlled systems, it's far harder to strip them out, especially once they start requiring a cellular connection to function (or connections to privately owned satellite constellations...).
Yeah, given how new it is, I probably wouldn't trust it for something that important. But in theory it's meant to handle that type of embedded system.
From what I'm hearing: yes
There will always be a class of programmers/people that choose not to interrogate or seek to understand information that is conveyed to them - that doesn't negate the value provided by tools like Stack Overflow or chatGPT, and I think OP was expressing that value.
Why do we expect a higher degree of trustworthiness from a novel LLM than we do from any given source or forum comment on the internet?
At what point do we stop hand-wringing over LLMs failing to meet some perceived level of accuracy and hold the people using them responsible for verifying the responses themselves?
There's a giant disclaimer on every one of these models that responses may contain errors or hallucinations. At this point I think it's fair to blame the user for ignoring those warnings, not the models for failing to meet some arbitrary standard.
reading comprehension
Lmao, there should also be an automod rule for this phrase, too.
There’s a huge difference between a coworker saying [...]
Lol, you're still talking about it like it's a person that can be reasoned with bud. It's just a piece of software. If it doesn't give you the response you want you can try using a different prompt, just like if google doesn't find what you're looking for you can change your search terms.
If people are gullible enough to take its responses as given (or scold it for not being capable of rational thought lmao) then that's their problem - just like how people can take the first search result from google without scrutiny if they want to, too. There's nothing especially problematic about the existence of an AI chatbot that hasn't been addressed with the advent of every other information technology.
Except each of those drips is subject to the same system that privileges individualized transport.
This is still a perfect example: while you're nit-picking the personal habits of individuals who account for a fraction of a fraction of total GPT model usage, huge multi-billion-dollar entities are implementing it in things that have no business using it, and those represent 90% of LLM queries.
It's similar to castigating people for owning ICE vehicles, who are not only uniquely pressured into their use but also account for less than 10% of GHG emissions in the first place.
Stop wasting your time attacking individuals using the tech for help in their daily tasks, they aren't the problem.
… I wasn’t trying to trick it.
I was trying to use it.
Err, I'd describe your anecdote more as an attempt to reason with it...? If you were using google to search for an answer to something and it came up with the wrong thing, you wouldn't then complain back to it about it being wrong, you'd just try again with different terms or move on to something else. If 'using' it for you is scolding it as if it's an incompetent coworker, then maybe the problem isn't the tool but how you're trying to use it.
I wasn’t aware the purpose of this joke meme thread was to act as a policy workshop to determine an actionable media campaign
Lmao, it certainly isn't. Then again, had you been responding with any discernible humor of your own I might not have had reason to take your comment seriously.
And yes, I very intentionally used the phrase ‘understand how computers actually work’ to infantilize and demean corporate executives.
Except your original comment wasn't directed at corporate executives; it reads more like a personal review of the tool itself. Unless your boss was the one asking you to use Gemini? Either way, that phrase is so much more often used for self-aggrandizement and condescension that it's hard to see it as anything else, especially when it follows an anecdote of someone trying to reason with a piece of software lmao.
Then why are we talking about someone getting it to spew inaccuracies in order to prove a point, rather than the decision of marketing execs to proliferate its use for a million pointless implementations nobody wants at the expense of far higher energy usage?
Most people already know and understand that it's bad at most of what execs are trying to push it as, it's not a public-perception issue. We should be talking about how energy-expensive it is, and curbing its use on tasks where it isn't anything more than an annoying gimmick. At this point, it's not that people don't understand its limitations, it's that they don't understand how much energy it's costing and how it's being shoved into everything we use without our noticing.
Somebody hopping onto openAI or Gemini to get help with a specific topic or task isn't the problem. Why are we trading personal anecdotes about sporadic personal usage when the problem is systemic, not individualized?
people who actually understand how computers work
Bit idea for moderators: there should be a site- or community-wide auto-mod rule that replaces this phrase with 'eat all their vegetables' or something equally un-serious and infantilizing as 'understand how computers work'.
The energy expenditure for GPT models is basically a per-token calculation. Having one generate a list of 3-4 token responses would barely be a blip compared to having it read and respond to entire articles.
There might even be a case that certain tasks are more energy-efficient with a GPT model than with multiple Google searches for the same thing, especially considering all the backend activity Google tacks on for tracking users and serving ads. Complaining about someone using a GPT model for something like generating a list of words is a little like a climate activist yelling at someone for taking their car to the grocery store while standing across the street from a coal-burning power plant.
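To make the per-token point concrete, here's a toy comparison. The 3 J/token figure is an assumed round number purely for illustration - real per-token energy varies enormously by model, batch size, and hardware - but the linear scaling is the point:

```python
# Assumed figure for illustration only; real values vary by model/hardware.
ASSUMED_JOULES_PER_TOKEN = 3.0

def query_energy_joules(num_tokens):
    """Energy scales roughly linearly with tokens processed/generated."""
    return num_tokens * ASSUMED_JOULES_PER_TOKEN

short_list = query_energy_joules(20)    # a handful of 3-4 token list items
article = query_energy_joules(2000)     # reading + responding to an article
print(short_list, article, article / short_list)
```

Whatever the true per-token cost is, the article-length query is two orders of magnitude more expensive than the short list - which is why the task matters more than the tool.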
The usefulness of Stack Overflow or a GPT model completely depends on who is using it and how.
It also depends on who or what is answering the question, and I can't tell you how many times someone new to SO has been scolded or castigated for needing/wanting help understanding something another user thinks is simple. For all of the faults of GPT models, at least they aren't outright abusive to novices trying to learn something new for themselves.
Obama tells Black men it’s ‘not acceptable’ to sit out election
He then ends up suggesting the reason they don't like Harris is because she's a woman -
“Because part of it makes me think – and I’m speaking to men directly – part of it makes me think that, well, you just aren’t feeling the idea of having a woman as president, and you’re coming up with other alternatives and other reasons for that.”
Macklemore dropped from Las Vegas festival lineup after viral 'f--- America' video
>News that the rapper was removed from the Neon City lineup comes after his performance at the Palestine Will Love Forever Festival in Seattle over the weekend. A video of Macklemore yelling "Yeah, f— America!" during his performance has since been viewed over a million times on social media.
>Macklemore has not kept his stance on the ongoing war in Gaza a secret. In May, he made headlines when he released "Hind's Hall," a rap single praising college students for their protests of the war and denouncing the U.S.'s role in the conflict.
Blinken told Congress, “We do not currently assess that the Israeli government is prohibiting or otherwise restricting” aid, even though the U.S. Agency for International Development and others had determined that Israel had broken the law.
>Blinken told Congress, “We do not currently assess that the Israeli government is prohibiting or otherwise restricting” aid, even though the U.S. Agency for International Development and others had determined that Israel had broken the law.
Rage Against The Machine - Bullet In The Head - 1993 - YouTube
"You're not supposed to be so blind with patriotism that you can't face reality. Wrong is wrong, no matter who says it"
Edited for legibility
Docker Help: Port collisions when using container-networking
edit: a working solution is proposed by @Lifebandit666@feddit.uk below:
>So you’re trying to get 2 instances of qbt behind the same Gluetun vpn container?
>I don’t use Qbt but I certainly have done in the past. Am I correct in remembering that in the gui you can change the port?
>If so, maybe what you could do is set up your stack with 1 instance in, go into the GUI and change the port on the service to 8000 or 8081 or whatever.
>Map that port in your Gluetun config and leave the default port open for QBT, and add a second instance to the stack with a different name and addresses for the config files.
>Restart the stack and have 2 instances.
-----
Has anyone run into issues with Docker port collisions when trying to run images behind a bridge network (I think I got those terms right)?
I'm trying to run the arr stack behind a VPN container (Gluetun, for those familiar), and I would really like to duplicate a container image within the stack (e.g. a separate download client for different types of downloads). As soon as I set network_mode to 'service' or 'container', I lose the ability to set the public/internal port of the service, which means any image that doesn't allow setting ports from an environment variable is stuck with whatever the default port is within the application.
Here's an example .yml:
```
services:
  gluetun:
    image: qmcgaw/gluetun:latest
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=[redacted]
      - WIREGUARD_PRIVATE_KEY=[redacted]
      - WIREGUARD_ADDRESSES=[redacted]
      - SERVER_COUNTRIES=[redacted]
    ports:
      - "8080:8080" # qbittorrent
      - "6881:6881"
      - "6881:6881/udp"
      - "9696:9696" # Prowlarr
      - "7878:7878" # Radarr
      - "8686:8686" # Lidarr
      - "8989:8989" # Sonarr
    restart: always

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: "qbittorrent"
    network_mode: "service:gluetun"
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=CST/CDT
      - WEBUI_PORT=8080
    volumes:
      - /docker/appdata/qbittorrent:/config
      - /media/nas_share/data:/data
```
Declaring ports in the qbittorrent service raises an error saying you cannot set ports when using the service network mode. Linuxserver.io has a WEBUI_PORT environment variable, but using it without also setting the service ports breaks the container (their documentation says this is due to CSRF issues and port mapping, but then why even include it as a variable?).
The only workarounds I can think of are doing a local build of the image that needs duplication to allow ports to be configured from the env variables, OR running duplicate Gluetun containers for each client, which seems dumb and not at all worthwhile.
Has anyone dealt with this before?
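For anyone landing here later, here's a rough compose sketch of the quoted workaround: one Gluetun container with two qbittorrent instances behind it. This assumes you change the second instance's WebUI port to 8081 in its GUI after the first start, then restart the stack; service names and volume paths are illustrative, not a tested config.

```
services:
  gluetun:
    image: qmcgaw/gluetun:latest
    cap_add:
      - NET_ADMIN
    ports:
      - "8080:8080" # qbittorrent #1 WebUI (default)
      - "8081:8081" # qbittorrent #2 WebUI (changed in its GUI)

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    network_mode: "service:gluetun"
    environment:
      - WEBUI_PORT=8080
    volumes:
      - /docker/appdata/qbittorrent:/config

  qbittorrent-2:
    image: lscr.io/linuxserver/qbittorrent:latest
    network_mode: "service:gluetun"
    environment:
      - WEBUI_PORT=8081
    volumes:
      - /docker/appdata/qbittorrent-2:/config
```

Since both instances share Gluetun's network namespace, the ports only need to be unique within that namespace and published once on the Gluetun service.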
'but why are you only complaining about DEMOCRATS?'
It's educate, AGITATE, organize
edit: putting this at the top so people understand the basis for this:
>You may well ask: “Why direct action? Why sit ins, marches and so forth? Isn’t negotiation a better path?” You are quite right in calling for negotiation. Indeed, this is the very purpose of direct action. Nonviolent direct action seeks to create such a crisis and foster such a tension that a community which has constantly refused to negotiate is forced to confront the issue. It seeks so to dramatize the issue that it can no longer be ignored. My citing the creation of tension as part of the work of the nonviolent resister may sound rather shocking. But I must confess that I am not afraid of the word “tension.” I have earnestly opposed violent tension, but there is a type of constructive, nonviolent tension which is necessary for growth. Just as Socrates felt that it was necessary to create a tension in the mind so that individuals could rise from the bondage of myths and half truths to the unfettered realm of creative analysis and objective appraisal, so must we see the need for nonviolent gadflies to create the kind of tension in society that will help men rise from the dark depths of prejudice and racism to the majestic heights of understanding and brotherhood.