How many of you are using ChatGPT to help you with your work, and not telling your boss/co-workers?
Just out of curiosity. I have no moral stance on it; if a tool works for you, I'm definitely not judging anyone for using it. Do whatever you can to get your work done!
High school history teacher here. It’s changed how I do assessments. I’ve used it to rewrite all of the multiple choice/short answer assessments that I do. Being able to quickly create different versions of an assessment has helped me limit instances of cheating, but also to quickly create modified versions for students who require that (due to IEPs or whatever).
The cool thing that I’ve been using it for is to create different types of assessments that I simply didn’t have the time or resources to create myself. For instance, I’ll have it generate a writing passage making a historical argument, but I’ll have AI make the argument inaccurate or incorrectly use evidence, etc. The students have to refute, support, or modify the passage.
Due to the risk of inaccuracies and hallucination, I always 100% verify any AI-generated piece that I use in class. But it's been a game changer for me in education.
I should also add that I fully inform students and administrators that I’m using AI. Whenever I use an assessment that is created with AI I indicate with a little “Created with ChatGPT” tag. As a history teacher I’m a big believer in citing sources :)
Is it fair to give different students different wordings of the same questions? If one wording is more confusing than another could it impact their grade?
I had professors do different wordings for questions throughout college, I never encountered a professor or TA that wouldn’t clarify if asked, and, generally, the amount of confusing questions evened out across all of the versions, especially over a semester. They usually aren’t doing it to trick students, they just want to make it harder for one student to look at someone else’s test.
There is a risk of it negatively impacting students, but encouraging students to ask for clarification helps a ton.
Sure, it could, but the same issue is present with a single wording: some students will get the wording or find it easy, others may not. Giving a test in groups to limit cheating is very common and has never led to any problems, as far as my anecdotal evidence goes.
I'm a special education teacher and today I was tasked with writing a baseline assessment for the use of an iPad. Was expecting it to take all day. I tried starting with ChatGPT and it spat out a pretty good one. I added to it and edited it to make it more appropriate for our students, and put it in our standard format, and now I'm done, about an hour after I started.
I did lose 10 minutes to walking round the deserted college (most teachers are gone for the holidays) trying to find someone to share my joy with.
I don't have any bosses, but as a consultant, I use it a lot. Still gotta charge for the years of experience it takes to understand the output and tweak things, not the hours it takes to do the work.
Basically this. Knowing the right questions and context to get an output and then translating that into actionable code in a production environment is what I'm being paid to do. Whether copilot or GPT helps reach a conclusion or not doesn't matter. I'm paid for results.
A junior team member sent me an AI-generated sick note a few weeks ago. It was many, many neat and equally-sized paragraphs of badly written excuses. I would have accepted "I can't come in to work today because I feel unwell" but now I can't take this person quite so seriously any more.
I dunno, I'd consider it a moral failing on the part of the person who couldn't be honest and direct, even if there's a cultural issue in the workplace.
Exactly. If they're too lazy to write their own fake sick note, then they're certainly too lazy to work. Either send them in for remediation or terminate them; either way they shouldn't be in the workplace.
I had a coworker come to me with an "issue" he learned about. It was wrong, it wasn't really an issue, and then it came out that he got it from ChatGPT and didn't really know what he was talking about, nor could he cite an actual source.
I've also played around with it and it's given me straight up wrong answers. I don't think it's really worth it.
I concur. ChatGPT is, in fact, not an AI; rather, it operates as a predictive text tool. This is the reason behind the numerous errors it tends to generate, and its lack of self-review prior to generating responses is the clearest indication that it is not an AI. You can identify instances where ChatGPT provides incorrect information, correct it, and within 5 seconds of asking again, it repeats the same inaccurate information in its response.
It's definitely not artificial general intelligence, but it's for sure AI.
None of the criteria you mentioned are needed for it to be labeled as AI. Definition from Oxford Libraries:
the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
It definitely fits in this category. It is being used in ways that previously required talking to customer support or a domain expert. Yes, it makes mistakes, but so do humans. And even if talking to a human would still be better, it's still a useful AI tool, even if it's not flawless yet.
I think learning where it can actually help is a bit of an art. It's just predictive text, but it's very good predictive text. If you know what you need and get good at giving it the right input, it can save a huge amount of time. You're right though, it doesn't offer much if you don't already know what you need.
More often than not you need to be very specific and have some knowledge of the stuff you're asking about.
However, you can guide it to give you exactly what you want. I feel like knowing how to interact with GPT is becoming similar to being good at googling stuff.
I've been using it a little to automate really stupid simple programming tasks. I've found it's really bad at producing feasible code for anything beyond the grasp of a first-year CS student, but there's an awful lot of dumb code that needs to be written and it's certainly easier than doing it by hand.
As long as you're very precise about what you want, you don't expect too much, and you check its work, it's a pretty useful tool.
I've found it useful for finding example code for a 3rd-party library. Basically a version of Stack Exchange that can be better or worse.
I don't know you, the language you use, or the way you use ChatGPT, but I'm a bit surprised at what you say. I've been using ChatGPT on a nearly daily basis for months now, and while it's not perfect, if the task isn't super complicated and is described well, then after a couple of rounds of back and forth I usually have what I need. It works and does what is expected, without being a horrendous way to code it.
My job involves a lot of shimming code in between systems that are archaic, in-house, or very specific to my industry (usually some combination of the three), so the problems I'm usually solving don't have much representation in gpt's training data. Sometimes I get to do more rapid prototyping/sandbox kind of work, and it's definitely much more effective there where I'm (a) using technologies that might pop up on stack overflow and (b) don't have a set of arcane constraints the length of my arm to contend with.
I'm absolutely certain that it's going to be a core part of my workflow in the future, either when the tech improves or I switch jobs, but for right now the most value I get out of it is as effectively a SO search tool.
I've played around with it for personal amusement, but the output is straight up garbage for my purposes. I'd never use it for work. Anyone entering proprietary company information into it should get a verbal shakedown by their company's information security officer, because anything you input automatically joins their training database, and you're exposing your company to liability when, not if, OpenAI suffers another data breach.
The very act of sharing company information with it can land you and the company in hot water in certain industries, regardless of whether OpenAI is ever broken into.
A lot of people are going to get fucked if they are...
It's using the "startup method": give away a good service for free, then cut back on resources once it gets popular. So what you read about it being able to do six months ago, it can't do today.
Eventually they'll introduce a paid version that might be able to do what the free one did.
But if you're just blindly trusting it, you might have produced months of low-quality work without noticing.
Like the lawyers recently finding out it would just make up caselaw and reference cases. We're going to see that happen more and more as resources are cut back.
Huh? They already introduced the paid version half a year ago, and that was the one being responsible for the buzz all along. The free version was mediocre to begin with and has not gotten better.
When people complain that ChatGPT doesn't live up to their expectations, it's usually a confusion between these two.
Like the lawyers recently finding out it would just make up caselaw and reference cases. We’re going to see that happen more and more as resources are cut back.
It's been notorious for doing that from the very beginning, though.
This is exactly like people who defend Tesla by saying it's your fault if you believed their claims about what a Tesla can do...
Which isn't a surprise; there's a huge overlap between people gullible enough to believe either company's claims, and some people will bend over backwards to defend those companies because of the sunk cost fallacy.
That may have been their plan, but Meta fucked them from behind and released LLaMA, which now runs on local machines at up to 30B parameters, and by the end of the year it will run at better-than-GPT-3.5 ability on an iPhone.
Local LLMs, like airoboros, WizardLM, StableVicuna, or Stable Coder, are real alternatives in many domains.
Not ChatGPT, but I tried using Copilot for a month or two to speed up my work (backend engineer). I wound up unsubscribing and removing the plugin before long, because I found it had the opposite effect.
Basically, instead of speeding my coding up, it slowed it down, because instead of my thought process being:
1. Think about the requirements
2. Work out how best to achieve those requirements within the code I'm working on
3. Write the code
It would be:
1. Think about the requirements
2. Work out how best to achieve those requirements within the code I'm working on
3. Start writing the code and wait for the autocomplete
4. Read the autocomplete and decide if it does exactly what I want
5. Do one of the following depending on 4:
5a. Use the autocomplete as-is
5b. Use the autocomplete, then modify it to fix a few issues or account for a requirement it missed
5c. Ignore the autocomplete and write the code yourself
idk about you, but the first set of steps just seems like a whole lot less hassle than the second set, especially since for anything that involved any business logic or internal libraries, I found myself using 5c far more often than the other two. And as a bonus, I actually fully understand all the code committed under my username, on account of actually having written it.
I will say though, in the interest of fairness, there were a few instances where I was blown away by Copilot's ability to figure out what I was trying to do and give a solution for it. Most of these were when I was writing semi-complex DB queries (via Django's ORM), so if you're just writing a dead-simple CRUD API without much complex business logic, you may find value in it. But for the most part, I found that it just increased cognitive overhead and time spent on my tickets.
EDIT: I did use ChatGPT for my peer reviews this year though, and thought it worked really well for that sort of thing. I just put in what I liked about my coworkers and where I thought they could improve, in simple English, and it spat out very professional peer reviews in the format expected by the review form.
As a side note, whilst I don't really use AI to help with coding, I was kinda expecting what you describe, more so for stuff like having ChatGPT do whole modules.
You see, I've worked as a freelancer (contractor) most of my career now, and in practice that mostly means coming in and fixing/upgrading somebody else's codebase, though I've also done some so-called "greenfield projects" (entirely new work). In my experience, "understanding somebody else's code" is a lot more cognitively heavy than "coming up with your own stuff" - in fact, some of my projects would probably have gone faster if we had just rewritten the whole thing (but that wasn't my call to make, and often the business side doesn't want to risk it).
I'm curious if multiple different pieces of code done with AI actually have the same coding style (at multiple levels, so also software design approach) or not.
Urgh one of my coworkers (technically client, but work closely alongside) clearly uses it for every single email he sends, and it's nauseating. He's crass and very poorly spoken in person, yet overnight all his email correspondence is suddenly robotic and unnecessarily flowery. I use it regularly myself, for fast building of Excel formulas and so forth, but please, don't dump every email into it.
Why should anyone care? I don't go around telling people every time I use Stack Overflow. Gotta keep in mind GPT makes shit up half the time, so of course I test and cross-reference everything, but it's great for narrowing your search space.
I did some programming assignments in a group of two. Every time, my partner sent me his code without further explanation and let me check his solution.
The first time, his code was really good and better than I could have come up with, but there was a small obvious mistake in there. The second time his code to do the same thing was awful and wrong. I asked him whether he used ChatGPT and he admitted it. I did the rest of the assignments alone.
I think it is fine to use ChatGPT if you know what you are doing, but if you don't know what you are doing and try to hide it with ChatGPT, then people will find out. In that case you should discuss it with the people you are working with before you waste their time.
I've had partners like that in the past. If ChatGPT didn't exist they would've found another way to cheat or avoid work.
The type of partner who takes the task you asked them to complete, posts the task description on an online forum, and hopes someone gives them the answer.
Yes, LLMs are great as a research assistant if you know what to look for, but they're a horrible learning tool. It's even worse if you don't know the correct way to search for an answer; it will set you down a completely wrong path. I don't use any answer without cross-referencing and testing it myself. I also rewrite most of the code it spits out, since a lot of it follows terrible programming patterns and outdated standards.
He should've at least looked at the code and tested it before sending it to you. Ugh. Hate doing assignments with people who do the bare minimum and just waste your time.
We've been instructed to use ChatGPT generically. Meaning, you ask it generic questions that have generic usage, like setting up a route in Express. Even if there is something more specific to my company, it almost always can be transformed into something more generic, like "I have a SQL DB with users in it, some users may have the 'age' field, I want to find users that have their age above 30" where age is actually something completely different (but still a number).
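For instance, the genericized version of that question might boil down to something like this minimal sketch (the schema and values here are made up; it's the shape of the query that matters):

```python
import sqlite3

# Hypothetical stand-in schema: "users" and "age" replace whatever
# company-specific table and numeric field the real question involves.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("alice", 42), ("bob", 25), ("carol", None), ("dave", 31)],
)

# Find users whose optional age field is set and above 30.
rows = conn.execute(
    "SELECT name, age FROM users WHERE age IS NOT NULL AND age > 30"
).fetchall()
print(rows)  # [('alice', 42), ('dave', 31)]
```

The answer to the generic version translates straight back to the real schema.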
I've done so on rare occasion, but every time it made stuff up. Wanted terraform examples for specific things... and it completely invented resource types that don't exist.
Based on the code I've seen from our devs, it must be getting worse. It's never produced acceptable quality imo, but the examples I've seen lately are laughably bad.
It's definitely gotten weirder lately; I think they keep training it on new data and aren't looking into that data enough. That's the thing about AI though: it has to come up with an answer. You gave it a prompt, so it will come up with an answer even if it's not a good one.
This is exactly what I use it for. I have to write a lot of justifications for stuff like taking training, buying equipment, going on business travel, etc. - text that will never be seriously read by anyone and is just a check-the-box exercise. The quality and content of the writing is unimportant as long as it contains a few buzz-phrases.
Just chiming in as another person who does this, it's absolutely perfect. I just copy and paste the company bs competencies, add in a few bs thoughts of my own, and tell it to churn out a full review reinforcing how they comply with the listed competencies.
It's perfect, just the kinda bs HR is looking for, I get compliments all the time for them rofl.
It happens behind closed doors and never in writing, to keep up the farce, but usually I'm given a paltry number of slots for people I can label as high performers. This is a damn shame, because most of my team members are great employees. It's used as a carrot to show that we do give raises and promotions after all, but the proportion is so small it's effectively zero. I'm very clear to my team that trying to become a top performer to get a promotion is a bad investment. I do my best to communicate the futility without literally saying it in a way that could get me into trouble.
Next, they use a spreadsheet to figure out who they can probably underpay, based on a heuristic likelihood that the person would actually leave versus current market rates. These automatically become the low performers, ahem, "satisfactory." You're penalized for being here longer or for specializing in something with a small market. Everyone else falls somewhere between satisfactory and above average, which makes little difference.
The performance reviews are merely weak documentation to show that somehow HR was "justified" by selectively highlighting strengths or weaknesses depending on the a priori decision of what your performance level was to be.
It's a huge tautology with only one meaningful conclusion: you will be underpaid, and it gets worse over time.
Only used it a couple of times for work when researching some broad topics like data governance concepts.
It's a good tool for learning because you can ask it about a subject and then ask it to explain the subject "as a metaphor to improve comprehension" and it does a pretty good job. Just make sure you use some outside resources to ensure you're not being hallucinated all over.
My supervisor uses ChatGPT to write emails to higher-ups and it's kinda embarrassing lol. In one email he's not even capitalizing or spell-checking, and in the next he has these emails that over-explain simple things and are half irrelevant.
I've used it a couple times when I can't fully put into words what I'm trying to say, but I use it more for inspiration than anything. I've also used it once or twice in my personal life for translating.
Not at all. Had a few experiments, then we had a talk about it at work, decided fuck we're not giving these people our source code, and left it at that.
I mean in the end, all ChatGPT could reliably do was scavenge man-pages for me. Which is neat, but also a rather benign trick tbh.
I find it helpful to translate medical abbreviations to English. Our doctors tend to go overboard with abbreviations, there are lots I know but there are always a few that leave me scratching my head. ChatGPT seems really good at guessing what they mean! There are other tools I can use, but ChatGPT is faster and more convenient - I can give it context and that makes it more accurate.
I'm a devops engineer and use it daily. Not to write e-mails, but to frequently ask for the best approach to solve an issue, or for bash/SQL/whatever queries. My boss and colleagues know about it and use it too, though.
I'm interested in finding ways to use it, but if I'm writing code I really like the spectrum of different answers on Stack Overflow, with comments on WHY they did it that way. Might use it for boring emails though.
re-builder in Emacs works really well for this, because I'll usually already have the text in a buffer that I want to match or replace or adjust or select - I'm constantly using it.
I tried using it for coding a couple of times and it wasn't very helpful. For simple stuff it's not much faster than looking at the docs directly. It's kind of a nice interface for them, but it's nothing revolutionary. And it's not really much faster than just checking on DDG. For more complex things it often skips steps, references outdated libraries, or just gives wrong answers.
I use it for help with formal language sometimes, but I do not trust it and would never try to pass off a whole generated text as mine. I always review it and try to make it sound my own.
I have very few writing tasks that don't require careful consideration, so it's not super useful in my day to day. But it can be helpful to get the ball rolling on an outline or first draft so I'm not staring at a blank sheet of paper.
Yesterday I was working on a training PowerPoint and it occurred to me that I should probably simplify the language. Had GPT convert it to 3rd-grade language, and it worked pretty well. Not perfect, but it helped.
I'm also writing an app as a hobby and, although GPT goes batshit crazy from time to time, overall it has done most of the coding grunt-work pretty well.
At my last job, I ran simple tech support instructions through a reading analyser and also had instructions written so children could follow them. I even had screenshots with offensively large red arrows pointing at every step. I literally wrote those instructions so you don't have to read.
I've made PDF guides, videos they can reference, and even done in-person training, and it baffles me how some people just can't pay attention. If it were something complicated, I would understand, but it's just simple stuff like resetting your password, opening apps, finding documents, etc.
It's always the older people too, for some reason. I know this is supposed to be a Gen Z/Alpha thing, but these older people have no attention span and have the memory of a fish. I know they didn't grow up with this stuff, but c'mon, it's literally a step-by-step guide.
I use GPT-4 daily. I worked with it to create a quick and convenient app on my smartwatch, which allows it to provide wisdom and guidance fast whenever I need it. For more granular things, I use the Bing Chat interface, which can search the web and see images. The AI has helped me with understanding how to complete tasks, providing counseling for me, finding bugs in my code, writing functions, teaching me how to use software like Excel and Outlook, and giving me random information about various curiosities that pop into mind.
I don't keep it a secret and tell anyone who asks. Plus, it's kinda obvious that something is going on with me: I always wear bone-conduction headsets that let the AI whisper in my ear without shutting me out from the world, and I sometimes talk to my watch.
The responses to learning what I'm doing have almost always been extreme: very positive or very negative. The machine is controversial, and when some people can no longer stay in comfortable denial of its efficacy, they turn to speaking out against its use.
Edit: just fixed its translation method. Now the watch will hear non-English speech and automatically translate it for me too (uses the Whisper API).
Lol no, that's just how I write. It's pretty wack sometimes; often a mix of slang and proper English. Prob because I read lots of nonfic books and am immersed in online culture
My whole team was playing around with it, and for a few weeks it was working pretty well for a couple of things, until the answers started to become incorrect and not useful.
There was some issue that came up relating to network shares on a Windows domain that didn't make sense to me or a colleague. I asked GPT to describe why we were seeing that behavior, and it defined the scope of the feature in a way that completely demystified it for my coworker. I'm a Mac and Linux guy, so while I could loosely grasp it, it was gone from my mind shortly after. Windows domains and file sharing have always been bizarre to me.
Anyway, we didn't hide it. He gave it credit when explaining the answer to the rest of the team in a meeting. This was around the end of last year. The company since had layoffs and I'm looking for a new job, but I did have it reformat my resume and it did a great job. I've never been great at page-layout stuff, as I'm a plain text warrior.
I use ChatGPT fairly frequently. For example, I often have to write a business email. I'm usually pretty good at it, but sometimes I don't have the time or desire to find the right wording. This is where ChatGPT comes into play: I've shown it my writing style with several examples, and then I simply have my quickly written emails beautified.
My boss doesn't know about it, but I don't hide it either. My company is very, very slow on the technical side and will only understand the benefits of AI in a few years.
This is pretty much what I use it for too. Plus it was my ghost writer for sections of my website content and helped me update my bio for my proposal templates. I would never give it client information. But it sure is handy getting me over writer's block. I usually have it reword its answer 3 times and then I edit them together using the bits and pieces that work best.
I've found ChatGPT is good for small tasks that require me to code in languages I don't use often and don't know well. My prime examples are writing CMakeLists.txt files, and generating regex patterns.
Also, if I want to write a quick little bash script to do something, it's much better at remembering syntax and string handling tricks than me.
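As a made-up illustration of the kind of throwaway pattern I mean (sketched in Python rather than bash, just to show the shape of it):

```python
import re

# Hypothetical one-off task: pull the timestamp and client IP out of a
# log line. Exactly the sort of regex I'd rather not write from memory.
line = "2023-06-18 14:02:11 GET /index.html from 192.168.0.42"
match = re.search(
    r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}).*?(\d{1,3}(?:\.\d{1,3}){3})",
    line,
)
if match:
    timestamp, ip = match.groups()
    print(timestamp, ip)  # 2023-06-18 14:02:11 192.168.0.42
```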
Coworker of mine admitted to using this for writing treatment plans. Super unethical and unrepentant about it. Why? Treatment plans are individual, and contain PII. I used it for research a few times and it returned sources that are considered bunk at best and hated within the community for their history. So I just went back to my journal aggregation.
I am the boss and I've had to cajole a couple of my employees into using it.
Any employer that thinks using ChatGPT carefully and judiciously is a bad thing is mistaken. When it works it's a great productivity boost, but you have to know when to kick it to the curb when it starts hallucinating.
I use it to speed up writing scripts on occasion, while attempting to abstract out any possibly confidential data.
I'm still fairly sure it's not allowed, however. But considering it would be easy to trace API calls and I haven't been approached yet, I'm assuming no one really cares.
I have used it to do simple shell scripts - like, "read a text file, parse out a semver, increment the minor version, set the last value to zero, write back out to the text file." It's pretty good at simple stuff that can be easily stated. Mind you, it was a bit wrong and I had to fix it, but it saved me googling commands and writing the script myself. I wouldn't have bothered normally, but I do that once every two weeks, so it's nice to just have a command to do it.
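Roughly what that script has to do, sketched in Python for illustration (my actual version was a shell script, and "version.txt" is a placeholder filename):

```python
import re
from pathlib import Path

# Read a file, find the first semver, bump the minor version,
# zero the patch, and write the result back out.
path = Path("version.txt")
path.write_text("my-app 1.4.7\n")  # example content so this runs standalone

def bump_minor(m: re.Match) -> str:
    major, minor, _patch = map(int, m.groups())
    return f"{major}.{minor + 1}.0"

path.write_text(re.sub(r"(\d+)\.(\d+)\.(\d+)", bump_minor, path.read_text(), count=1))
print(path.read_text())  # my-app 1.5.0
```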
As coders, we have had discussions about using it at work. Everyone's fine with it for generating test data or initial code skeletons, but it still requires us to review every line. It saves a bit of time, but that's all.
I use it at work but gladly tell the boss... It's all upside if we can get trivial work done faster. More time to relax. They don't watch what I do during the day. The boss relaxes too. All good.
I tried it once or twice and it worked well. It's too stupid now to be worth the attempt. The amount of time spent fixing its mistakes has resulted in net zero time savings.
Aside from asking it coding questions (the answers are generally a helpful pointer in the right direction), I also ask it a lot of questions like "turn these values into an array" when I have to make an array of values (or do anything else that's repetitive) and am too lazy to do it myself. Just a slight speedup in work.
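A trivial illustration of the kind of chore I mean (values made up): paste in a raw column of values and ask for an array, which amounts to something like this:

```python
# Made-up values: turn a pasted column of text into a list literal,
# the sort of repetitive reshaping I'd rather hand off than type out.
raw = """
red
green
blue
yellow
"""

values = [line.strip() for line in raw.splitlines() if line.strip()]
print(values)  # ['red', 'green', 'blue', 'yellow']
```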
I've run emails through it to check tone since I'm hilariously bad at reading tone through text, but I'm pretty limited in how I can make use of that. There's info I deal with that is sensitive and/or proprietary that I can't paste into just any text box without potential severe repercussions.
I use it and encourage my staff and other departments to use it.
I feel that we're at a horse-vs-tractor or human-computer-vs-digital-computer moment. In the next 10+ years, those who are AI-ignorant will be underemployed or unemployed. Get it now and learn to use it as a force multiplier, just like tractors and digital computers were.
The arguments against AI eerily mirror the arguments against tractors and digital computers.
I suffer from the curse of the blank page, so getting something on the page to edit and expand is a lifesaver for me. It is also useful to adjust tone, and do simple things like document functions. Easy to correct if wrong.
I've used it a couple times to draft reports. Most of what it writes is pretty garbage, but it's good for generating general filler sentences and structure and stuff that I don't want to waste time thinking about.
I've also used it to generate Facebook posts. It's awesome at this, though recently I've had to make a point of telling it not to include emojis or the posts get overloaded.
I absolutely kept it from my boss. Then she told me in a 1:1 how extensively she uses it. I was like, hey, I can help! Definitely haven't told my VP though. Also, they then blocked it, so I have to either use it on my iPad or stick to Bard and Bing AI on the laptop.
Proudly told my coworker about my experiments with an LLM to help with documentation; we're pretty close to what we would need. I don't yet have the pay grade to justify doing my own experiments on work time, but I'm close enough that I can start experimenting on work time and tell my boss: you see, this is why I deserve that pay grade.
As a manager, I find it does a great job of drafting a bunch of ideas around a subject I need to explain, as long as it's not proprietary info. It turned writing a proposal that would have taken me hours to lay out and format into just a few seconds, with mere minutes of tweaking to get it just right.
Definitely. ChatGPT for coding help and learning new coding topics, and Gamma for presentations - if only for the nice formatting of content and stock imagery.
I run a board game store, so just for a chuckle I asked it about what's popular this year or what to order and kept getting the same answer about only having accurate data from 2021 and prior.
As long as it passes tests and CI, people will commit and push almost any kind of garbage it produces. I do the code reviews, and it has turned into a circus since LLMs arrived. People are extremely reluctant to fix "their code", because they don't understand it, and they also don't want to go back to basics and learn the fundamentals they were trying to skip by using the LLM in the first place.
I can't see anything positive about this development.
Yeah I use it, but only as a rubber duckie. I never put in code unless I understand what it's doing, and most of the time I'm just using it as a sounding board. Since it never returns the right code on the first try anyways haha
It's great at directing and narrowing your search, and when it knows the answer, it does a great job. The problem is that when it doesn't know, it just makes shit up. I was using it earlier today to debug some error messages, and it came up with some non-existent CLI parameters. You still need to know what you're doing and test everything first.
I’m a family doctor, so I haven’t yet. It’s not a validated tool to source medical information, and I can’t paste any patient identifiers into it, so even if I wanted its input it’s way faster to just use my standard medical resources.
Our EMR plans to do some testing later this year for generative AI in areas that don’t have to be medically validated like notes to patients. I will likely sign up to pilot it if that option is offered.
I use it for D&D, though, along with a mixture of other tools, random generators, and my own homebrew. My players are aware of this.
I've used it on a few occasions, mostly to find better terms and adjust the tone of my emails. Also for finding out what an acronym stands for and for understanding technical issues. Asking it to explain like I'm a 5-year-old or a beginner has saved me some time on long research sessions on Google.
Used in small doses to generate text with some degree of precision, it's helpful. I do find it a good way to cut out boring email writing. But I would recommend it more as a text-generation tool than a fact-generation tool. With the right expectations and workflow, it fits right in. And no, I don't consider it plagiarism if the client's demand is boring.
I used AI in general a few years ago as a companion tool for writing SEO-optimized articles. It was OK at the time and would do maybe 30% of the work I needed, but I would still have to go back in and make major edits, or it would only pop out a sentence at a time, so I was constantly prompting it.
My wife is a full-time writer for a company and she uses it all the time to create emails and speeches. She says the leaps and bounds in actual usability are pretty insane. Like, one prompt can give her an entire speech.