LLaVA-Phi: The Training We Put It Through
#llms #llavaphi #clipvitl #llava15 #phi2 #supervisedfinetuning #sharegpt #trainingllavaphi
https://hackernoon.com/llava-phi-the-training-we-put-it-through
Hackernoon
Our overall network architecture is similar to LLaVA-1.5. We use the pre-trained CLIP ViT-L/14 with a resolution of 336×336.
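As a back-of-the-envelope illustration (not stated in the article itself), the patch grid such a ViT-L/14 encoder produces at 336×336 resolution can be computed as:

```python
def vision_token_count(image_size: int = 336, patch_size: int = 14) -> int:
    """Number of patch tokens a ViT encoder emits (excluding any CLS token).

    The image is split into non-overlapping patch_size x patch_size patches,
    so the token count is the squared number of patches per side.
    """
    patches_per_side = image_size // patch_size  # 336 // 14 = 24
    return patches_per_side ** 2                 # 24 * 24 = 576


print(vision_token_count())  # 576 visual tokens per image
```

This is why LLaVA-style models at 336×336 feed a sequence of 576 visual tokens into the language model's projector; the exact handling in LLaVA-Phi may differ.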
LLaVA-Phi: Limitations and What You Can Expect in the Future
#llms #llavaphi #whatisllavaphi #llavaphilimitations #futureofllms #phi2 #llavaphiarchitecture #mideagroup
https://hackernoon.com/llava-phi-limitations-and-what-you-can-expect-in-the-future
Hackernoon
We introduce LLaVA-Phi, a vision-language assistant built on the compact language model Phi-2.