FREE GUIDE #182

An exclusive guide to using LLMs as your personal research assistant

The first and most important rule is this:

Never ask an LLM for information you can't validate yourself, or to do a task that you can't verify has been completed correctly.

The one exception is when the task isn't crucial. For instance, asking an LLM for apartment decorating ideas is fine.

Bad: "Using literature review best practices, summarize the research on breast cancer over the last ten years."

This is a bad request because you can't directly check whether it has summarized the literature correctly.

Better: "Give me a list of the top review articles on breast cancer research from the last 10 years."

This is better because you can verify that the sources exist, vet them yourself, and, of course, know that they were written by human experts.

TIPS FOR WRITING PROMPTS:

It's pretty easy to ask an LLM to write code or find relevant information for you, but the quality of the responses can vary widely. Luckily, there are things you can do to improve the quality.

SET THE CONTEXT:

  • Tell the LLM explicitly what information it should be using

  • Use terminology and notation that biases the LLM towards the right context

If you have thoughts about how to approach a request, tell the LLM to use that approach.

 "Solve this inequality."

 "Solve this inequality using the Cauchy-Schwarz theorem followed by an application of completing the square."

These models are a lot more linguistically sophisticated than you might imagine.

Even extremely vague guidelines can be helpful.

BE SPECIFIC:

This isn't Google. You don't have to worry about whether there's a website that discusses your exact problem.

 "How do I solve a simultaneous equation involving quadratic terms?"

 "Solve x=(1/2)(a+b) and y= (1/3) (a^2+ab+b^2 for a and b" for a and b"

DEFINE YOUR OUTPUT FORMAT:

Take advantage of the flexibility of LLMs to format the output in the way that's best for you, such as:

  • Code

  • Mathematical formulas

  • An essay

  • A tutorial

  • Bullet points

You can even ask for code that generates:

  • Tables

  • Plots

  • Diagrams
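
For example, if you ask for plotting code, you want a complete, runnable script back. The sketch below is the kind of thing I mean (a generic matplotlib example with made-up data, not the output of any particular model):

  # A self-contained plotting script of the kind you can ask an LLM for.
  # Assumes numpy and matplotlib are installed; the data is made up.
  import numpy as np
  import matplotlib.pyplot as plt

  x = np.linspace(0, 10, 200)
  y = np.sin(x)

  fig, ax = plt.subplots()
  ax.plot(x, y, label="sin(x)")
  ax.set_xlabel("x")
  ax.set_ylabel("y")
  ax.set_title("Example plot")
  ax.legend()
  plt.show()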

Once you have the output of an LLM, that is only the beginning.

YOU NEED TO VALIDATE THE RESPONSE.

This includes:

  • Finding inconsistencies

  • Googling terminology in the response to get supporting sources

  • Where possible, generating code to test the claims yourself
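
For example, if an LLM claims that the sum of the first n odd numbers is always n^2 (just an illustrative claim), a throwaway script settles it before you rely on it:

  # Numerically check a claim before trusting it. The claim tested here
  # (sum of the first n odd numbers equals n^2) is only an example.
  for n in range(1, 1000):
      total = sum(2 * k - 1 for k in range(1, n + 1))
      assert total == n**2, f"Claim fails at n={n}: {total} != {n**2}"
  print("Claim holds for n = 1 to 999")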

LLMs often make weird mistakes that are inconsistent with their seeming level of expertise.

For instance, an LLM might mention an extremely advanced mathematical concept yet fumble over simple algebra.

This is why you need to CHECK EVERYTHING.

USE THE ERRORS TO GENERATE FEEDBACK:

  • If you see an error or inconsistency in the response, ask the LLM to explain it

  • If the LLM generates code with bugs, cut and paste the error messages into the LLM window and ask for a fix
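
As a made-up illustration of that loop: suppose the LLM's code increments a dictionary entry that doesn't exist yet and crashes with a KeyError. Pasting the traceback back into the chat will usually get you a fix along these lines:

  # Illustrative fix after pasting a KeyError traceback back to the LLM.
  # The original code did counts["word"] += 1 on a plain dict, which
  # fails the first time a word appears; a defaultdict avoids that.
  from collections import defaultdict

  counts = defaultdict(int)
  for word in ["llm", "prompt", "llm"]:
      counts[word] += 1
  print(dict(counts))  # {'llm': 2, 'prompt': 1}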

ASK MORE THAN ONCE:

LLMs are random. Sometimes simply starting a new window and asking your question again can give you a better answer.

USE MORE THAN ONE LLM:

I currently use Bing AI, GPT-4, GPT-3.5, and Bard AI depending on my needs. They have different strengths and weaknesses.

In my experience, it's good to ask GPT-4 and Bard AI the same math questions to get different perspectives. Bing AI is good for web searches. GPT-4 is significantly smarter than GPT-3.5 (like a student at the 90th percentile vs the 10th percentile) but harder to access (for now).

REFERENCES

References are an especially weak point for LLMs. Sometimes, the references an LLM gives you exist and sometimes they don't.

The fake references aren't completely useless. In my experience, the words in the fake references are usually related to real terms and researchers in the relevant field. So googling these terms can often get you closer to the information you're looking for.

Additionally, Bing AI is designed to find web references, so it's also a good option when hunting for sources.
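
If you'd rather check programmatically, you can also query a bibliographic database directly. The sketch below uses Crossref's public REST API via the requests library; that's my own choice of tool, so treat the details as something to verify against Crossref's documentation rather than gospel:

  # Search Crossref for a citation an LLM gave you, to see whether
  # anything like it actually exists. Assumes requests is installed;
  # the endpoint and field names follow Crossref's documented REST API.
  import requests

  def lookup_reference(citation_text, rows=3):
      resp = requests.get(
          "https://api.crossref.org/works",
          params={"query.bibliographic": citation_text, "rows": rows},
          timeout=30,
      )
      resp.raise_for_status()
      for item in resp.json()["message"]["items"]:
          title = (item.get("title") or ["(no title)"])[0]
          print(f"{title} -- https://doi.org/{item['DOI']}")

  # Paste in the title and authors exactly as the LLM gave them.
  lookup_reference("review of breast cancer research advances")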

PRODUCTIVITY

There are a lot of unrealistic claims that LLMs can make you 10x or even 100x more productive. In my experience, that kind of speedup only makes sense when none of the work is being double-checked, which would be irresponsible for me as an academic.

However, specific areas where LLMs have been a big improvement on my academic workflow are:

  • Prototyping ideas

  • Identifying dead-end ideas

  • Speeding up tedious data reformatting tasks

  • Learning new programming languages, packages, and concepts

  • Google searches

I spend less time stuck on what to do next. LLMs help me advance even vague or partial ideas into full solutions.

LLMs also reduce the time I spend on distracting and tedious side tasks unrelated to my main goal. I find that I enter a flow state and I'm able to just keep going. This means I can work for longer periods without burnout.

One final word of advice: Be careful not to get sucked into side projects. The sudden increase in productivity from these tools can be intoxicating and can lead to a loss of focus.

Thanks for reading. See you next time! Uff uff 🐶