Data:
{
"text": "Well, I'll be mostly working making my own CNNs, I think max 256 x 256, with at least 32 batch sizes. As for LLMs, I don't know yet, but for my diploma I would like to run at least something half as decent as LLama 3 8B. \n\\*edit: I don't know if there are any acceptable models that can fit into 8 gigs, but I guess I'd really have to go for much more VRAM for LLMs.",
"label": "r/deeplearning",
"dataType": "comment",
"communityName": "r/deeplearning",
"datetime": "2024-05-19",
"username_encoded": "Z0FBQUFBQm5Lakw0OS1SOXRHdTk0bl8tbmlzVllmZXlvc2U1TWJyaFpocFN2TTV0UURSa3BVNFBsUERwMXBneGZ4ZlM4RWFyVTJxOEcxMkR1QW40R1lSWmZOeldxMEVfTnI0VXNuNlg2YkJJZUZkbGV5QzZLcjA9",
"url_encoded": "Z0FBQUFBQm5Lak9IdVF4RUVhMUtzU1otWW1WNUF2S1B1Y25RMzZxRjlHQmlTY29UMlFuc2hWZXVDQUlNUEE4TlFFaEs3OHNNWS1ZMVZkLUdkWEhvdzZnRktMNDZlLTBlcjF6cXR4SzBYSlZfSzcwMktmTzlLOVpDd2c0QkRfek5Jc0h1WkRrX0R6WjgyQkxZYWJNU2tKVU5OQmlYdUkyZjk1OXlGSGwxQ0Flck16ZXc3YXZ6MEJrWW9tSkI3UkxsR3ZsOTNwdW5MaG5JYWV3bUkwaFhUVU8wdEQ5OGpjRTlBZz09"
}
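
The "fit into 8 gigs" doubt in the comment comes down to simple arithmetic: weight memory is roughly parameter count times bytes per parameter. A minimal sketch of that estimate (the `weight_vram_gb` helper is my own illustrative name, not from any library; real usage also needs room for KV cache, activations, and framework overhead):

```python
# Back-of-envelope VRAM estimate for holding model weights at various precisions.
# Illustrative only: inference additionally needs KV cache, activations, and
# framework overhead, so the true requirement is higher than these figures.

def weight_vram_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate GB of VRAM needed just to store the weights."""
    return n_params * bits_per_param / 8 / 1e9

n = 8e9  # Llama 3 8B parameter count
for name, bits in [("fp16", 16), ("int8", 8), ("4-bit", 4)]:
    print(f"{name}: ~{weight_vram_gb(n, bits):.1f} GB")
# fp16: ~16.0 GB, int8: ~8.0 GB, 4-bit: ~4.0 GB
```

So an 8B model in fp16 is indeed out of reach of an 8 GB card, int8 is borderline once overhead is counted, and only 4-bit quantization leaves comfortable headroom, which matches the commenter's hesitation.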