LLM Training With PHP Improved Using Txt Datasets!

Hi guys, how are you doing?
Last week I shared the first version of this simple language model trainer written in PHP.

For those who missed it, it uses a simple Markov chain to calculate the probability of the next word based on the previous words.
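
If you want a feel for what the Markov chain part does, here is a rough sketch of the idea. This is not the code from the repo, just a minimal bigram version, and dataset.txt is only a placeholder filename: count how often each word follows each other word, then normalize the counts into probabilities.

```php
<?php
// Minimal sketch (not the repo's actual code): build bigram counts from a
// plain-text dataset and turn them into next-word probabilities.
// 'dataset.txt' is just a placeholder filename.

$text  = strtolower(file_get_contents('dataset.txt'));
$words = preg_split('/\s+/', $text, -1, PREG_SPLIT_NO_EMPTY);

$counts = [];
for ($i = 0; $i < count($words) - 1; $i++) {
    $prev = $words[$i];
    $next = $words[$i + 1];
    $counts[$prev][$next] = ($counts[$prev][$next] ?? 0) + 1;
}

// P(next | prev) = count(prev, next) / count(prev).
$probs = [];
foreach ($counts as $prev => $nextCounts) {
    $total = array_sum($nextCounts);
    foreach ($nextCounts as $next => $count) {
        $probs[$prev][$next] = $count / $total;
    }
}
```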

Now I have improved the training dataset and the next-word selector.
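
The next-word selector is basically a weighted random pick over those probabilities. Here is a rough sketch of that idea too (again, not the repo's exact code; pickNextWord is just a name I made up for this example):

```php
<?php
// Minimal sketch of a weighted next-word selector, assuming a $probs array
// shaped like the one in the previous sketch.

function pickNextWord(array $probs, string $prev): ?string
{
    if (!isset($probs[$prev])) {
        return null; // unseen word: caller can restart from a random word
    }
    $r = mt_rand() / mt_getrandmax(); // uniform draw in [0, 1]
    $cumulative = 0.0;
    foreach ($probs[$prev] as $word => $p) {
        $cumulative += $p;
        if ($r <= $cumulative) {
            return $word;
        }
    }
    return array_key_last($probs[$prev]); // guard against float rounding
}
```

You would call it in a loop, feeding each picked word back in as the new $prev, until you hit a length limit or a word with no successors.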

Here’s the link:

https://github.com/AcidBurn86/LM-nGram-with-php/

It's a good way to start understanding how big LLMs work. And of course I know this could never perform like GPT or Llama.

It's just educational code for PHP fans.

Shares and GitHub stars are welcome!

submitted by /u/OficialPimento
