I asked ChatGPT how to automate transfers from a funding account to a savings goal account

Elias1954
Elias1954 Quicken Windows Subscription Member ✭✭

The 4th step of the instructions was impossible to do. It asked that I choose the specific goal that I want the auto transfer to go to. Well, there is no way I can choose it. The only thing I can do is edit it, and nothing else that is mentioned in steps 5 thru 7.

My version of Quicken is build 27.151.10. Can anyone explain where I went wrong?

Best Answer

  • GeoffG
    GeoffG Quicken Windows Subscription SuperUser ✭✭✭✭✭
    edited August 2023 Answer ✓

    Assuming you have goals set up, the only way to automate transfers to/from them is to create transfer reminders for the accounts involved.

    I also have the reminders set to auto post, so this process is totally autonomous.

Answers

  • NotACPA
    NotACPA Quicken Windows Subscription SuperUser ✭✭✭✭✭

    What Q product are you running?

    Q user since February, 1990. DOS Version 4
    Now running Quicken Windows Subscription, Business & Personal
    Retired "Certified Information Systems Auditor" & Bank Audit VP

  • Elias1954
    Elias1954 Quicken Windows Subscription Member ✭✭

    All set, thank you. However, why did the GPT instructions generate an answer that was not workable? I struggled for a half hour to figure out what was wrong.

  • MHSwizzleStick
    MHSwizzleStick Quicken Windows Subscription Member ✭✭✭✭

    Just do a Google search for "ChatGPT makes up facts". It's not a reliable source of information.

  • Chris_QPW
    Chris_QPW Quicken Windows Subscription Member ✭✭✭✭

    When people talk about the dangers of AI, they usually talk about AI somehow taking over the world and killing humans off, but that isn't really the dangerous part of AI (at least not yet). The real dangers are people "humanizing" it and believing that it is more than a powerful "echo chamber" that can look up facts really quickly, from trusted and untrusted sources, and return what you want to hear. They believe it is some all-powerful being that will always give them the best answer.

    Paint some eyes on a grapefruit and people will start to think of it as human; the same goes for giving a machine a voice. Alexa is "she" when it should be "it".

    Computers are very good at looking things up, but they don't have "common sense" or the ability to really understand what they are showing you. GPT will feed back to you what you want to hear (an echo chamber), and it doesn't fact-check its results.

    Some of the biggest failures in AI so far show some of its limitations. The Internet is sort of an example of these naive beliefs, even (especially) from the developers: "What the world needs is a way for the average person to publish whatever they like, and the sharing of information will be great" (social media). Of course, they didn't count on how terrible some people can be, especially when they can post anonymously. When the first AI "chatbots" that would learn from the input of people were put out, they were usually shut down within 24 hours, because people purposely fed them hate and inaccurate information. The newer ones have to have limitations on what they will pick up from you, but make no mistake: the purpose of "learning" is to "form prejudice". An AI will learn whatever the teacher is teaching, even if the teacher doesn't realize that they are teaching a given thing.

    For instance, it was found that face recognition was more "prejudiced" against dark-skinned individuals. The reason is that the features being used to decide whether a face matched a given person were harder to pick up on dark-skinned individuals. So more "false positives" were generated for dark-skinned individuals, yet as far as the algorithm was concerned, it gave a high confidence that this was that person.

    In a lot of cases the developers don't even know what the AI algorithm is learning or why it came up with a given answer. Sometimes this can lead to surprisingly good results that humans hadn't thought of, but at other times it can be total garbage, and it is up to the humans to decide which it is. This really isn't any different from people believing whatever is posted on the Internet without properly checking it out. But because it is coming from an AI, they believe it without checking. That is the danger.

    Signature:
    This is my website: http://www.quicknperlwiz.com/

This discussion has been closed.