LLaVA-Phi: The Training We Put It Through
#llms #llavaphi #clipvitl #llava15 #phi2 #supervisedfinetuning #sharegpt #trainingllavaphi
https://hackernoon.com/llava-phi-the-training-we-put-it-through
Hackernoon
Our overall network architecture is similar to LLaVA-1.5. We use the pre-trained CLIP ViT-L/14 with a resolution of 336×336.
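The excerpt above names the vision encoder. A minimal sketch of the arithmetic behind that choice, assuming the standard CLIP ViT-L/14 configuration (14×14 non-overlapping patches), shows how the 336×336 input resolution determines the number of visual tokens fed to the language model:

```python
# Sketch: visual-token count for a CLIP ViT-L/14 encoder at 336x336.
# The image size and patch size come from the CLIP ViT-L/14-336 configuration;
# the function name is illustrative, not part of the LLaVA-Phi codebase.

def num_visual_tokens(image_size: int = 336, patch_size: int = 14) -> int:
    """Each non-overlapping patch becomes one visual token."""
    grid = image_size // patch_size  # patches per side
    return grid * grid

print(num_visual_tokens())  # 24x24 patch grid -> 576 visual tokens
```

At 336×336, the encoder produces a 24×24 patch grid, i.e. 576 visual tokens, a higher-resolution input than the 224×224 variant's 256 tokens.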
LLaVA-Phi: Qualitative Results - Take A Look At Its Remarkable Generalization Capabilities
#llms #llavaphi #llava15 #scienceqa #whatisllavaphi #llavaphiqualitativeresults #mideagroup #visionlanguageassistant
https://hackernoon.com/llava-phi-qualitative-results-take-a-look-at-its-remarkable-generelization-capabilities
Hackernoon
We present several examples that demonstrate the remarkable generalization capabilities of LLaVA-Phi, comparing its outputs with those of the LLaVA-1.5-13B model.