
Media (2)
-
SPIP - plugins - embed code - Example
2 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
Publishing an image simply
13 April 2011, by ,
Updated: February 2012
Language: French
Type: Video
Other articles (96)
-
Customizing by adding your logo, banner or background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
User profiles
12 April 2011, by
Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in on the site.
The user can edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.
On other sites (9227)
-
ffmpeg produces mp4 I cannot load on latest Safari on iOS
28 March 2016, by Michael Heuberger
When I encode a video with these ffmpeg parameters (based on images), I cannot play the mp4 it produces in the latest Safari (inside a tag) on my iPhone 6s running the latest iOS either:
ffmpeg
-r 15.279071668502123
-f image2 -thread_queue_size 64
-i /home/michael-heuberger/abcd/frames/%d.webp
-y
-an
-vcodec libx264
-vf scale=trunc(iw/2)*2:trunc(ih/2)*2
-crf 16
-preset fast
-profile:v baseline
-pix_fmt yuv420p
-loglevel warning
-movflags faststart
/home/michael-heuberger/abcd/videomail_good.mp4
I think the above parameters should be solid. I am adding baseline and yuv420p, yet no luck in Safari 9. Why?
This makes it difficult for me to play videomails recorded on www.videomail.io on iOS devices.
Am I missing something here? I have already done lots of research and tried various combinations, with no luck.
If you want to reproduce this, it's easy: just record a video on www.videomail.io, then open the link to the recorded video page on your iPhone, or download the file for local investigation.
Any clues are very welcome!
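Not part of the question, but a quick way to check what actually ended up in the mp4 is to probe it with FFmpeg's C API. Below is a minimal sketch of my own (assuming a build recent enough to have AVCodecParameters) that prints the H.264 profile, level and pixel format of the first video stream, so they can be compared with what Safari on iOS expects (baseline profile, yuv420p):

// probe.cpp -- my own sketch, not from the question
extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/pixdesc.h>
}
#include <cstdio>

int main(int argc, char **argv)
{
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s file.mp4\n", argv[0]);
        return 1;
    }
    av_register_all();                         // not needed on FFmpeg >= 4.0
    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
        return 1;
    if (avformat_find_stream_info(fmt, NULL) < 0) {
        avformat_close_input(&fmt);
        return 1;
    }
    int idx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (idx >= 0) {
        const AVCodecParameters *par = fmt->streams[idx]->codecpar;
        const char *pix = av_get_pix_fmt_name((AVPixelFormat)par->format);
        std::printf("profile=%d (FF_PROFILE_H264_BASELINE=%d) level=%d pix_fmt=%s\n",
                    par->profile, FF_PROFILE_H264_BASELINE, par->level,
                    pix ? pix : "unknown");
    }
    avformat_close_input(&fmt);
    return 0;
}

ffprobe reports the same fields; doing it in code just makes it easy to script across many recordings.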
-
How should I set the video and audio timestamps when using ffmpeg to publish an rtmp stream to an nginx-rtmp server
8 July 2016, by user1190248
I am using ffmpeg to publish my AV stream to an nginx-rtmp server.
My H.264 stream is 30 fps; the AAC audio is 8 kHz stereo.
In the packet-sending function, I use the following code to read frames and set the audio and video timestamps:
void RtmpLiveEncoder::Run()
{
    AVBitStreamFilterContext* aacbsfc = av_bitstream_filter_init("aac_adtstoasc");
    start_time = av_gettime();
    while(1)
    {
        // video packet
        do
        {
            int ret = 0;
            AVPacket pkt;
            av_init_packet(&pkt);
            ret = av_read_frame(ifmt_ctx, &pkt);
            if(ret < 0)
            {
                printf("read video frame failed\n");
                break;
            }
            if(pkt.pts == AV_NOPTS_VALUE)
            {
                if(_frameduration == 0)
                {
                    // wall-clock milliseconds since the encoder started
                    pkt.dts = pkt.pts = (av_gettime() - start_time) / 1000;
                }
                else
                {
                    // accumulate a fixed per-frame duration
                    pkt.dts = pkt.pts = _lastvideopts;
                    pkt.duration = _frameduration;
                    pkt.pos = -1;
                    _lastvideopts += _frameduration;
                }
            }
            if(av_write_frame(ofmt_ctx, &pkt) < 0)
            {
                printf("write video frame failed\n");
            }
            av_packet_unref(&pkt);
        } while(0);

        // audio packet
        do
        {
            if(!_hasaudio)
            {
                break;
            }
            if((_lastaudiopts - _lastvideopts) > 0)
            {
                // hold audio back until video catches up
                printf("the audio is faster than video, the audio pts is %d, the video pts is %d\n", _lastaudiopts, _lastvideopts);
                break;
            }
            int ret = 0;
            AVPacket audiopacket;
            av_init_packet(&audiopacket);
            ret = av_read_frame(aifmt_ctx, &audiopacket);
            if(ret < 0)
            {
                break;
            }
            AVStream* out_stream = ofmt_ctx->streams[1];
            // strip the ADTS header from the AAC packet
            if(av_bitstream_filter_filter(aacbsfc, out_stream->codec, NULL, &audiopacket.data, &audiopacket.size, audiopacket.data, audiopacket.size, 0) < 0)
            {
                printf("remove adts header failed\n");
            }
            if(av_bitstream_filter_filter(aacbsfc, out_stream->codec, NULL, &audiopacket.buf->data, &audiopacket.buf->size, audiopacket.buf->data, audiopacket.buf->size, 0) < 0)
            {
                printf("remove adts header failed\n");
            }
            audiopacket.stream_index = 1;
            audiopacket.dts = audiopacket.pts = _lastaudiopts;
            // 1024 samples per AAC frame, converted to milliseconds
            audiopacket.duration = (double)1024 / out_stream->codecpar->sample_rate * 1000;
            _lastaudiopts += audiopacket.duration;
            audiopacket.pos = -1;
            if(av_write_frame(ofmt_ctx, &audiopacket) < 0)
            {
                printf("write audio failed.\n");
            }
            av_packet_unref(&audiopacket);
        } while(0);
    }
    av_write_trailer(ofmt_ctx);
}
As you can see, I set the audio and video pts using the frame duration.
Then I use VLC or Chrome on an iPhone to access the HLS stream. At first, the audio and video are synchronized, but as time goes on they drift and the audio runs ahead of the video.
So what is wrong with it?
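Not the asker's code, but one common alternative for keeping A/V in sync, following the pattern of FFmpeg's remuxing example: when the demuxer already supplies pts/dts, rescale them into the output (FLV/RTMP) stream's time base instead of accumulating integer millisecond durations, which can round a 33.33 ms video frame duration and let the audio drift ahead over time. The helper name write_rescaled is hypothetical:

// My own sketch, not the asker's code: rescale pts/dts/duration into the
// output stream's time base (milliseconds for FLV/RTMP) in one call.
extern "C" {
#include <libavformat/avformat.h>
}

// Hypothetical helper: write one packet read from in_stream to out_stream.
static int write_rescaled(AVFormatContext *ofmt_ctx,
                          AVStream *in_stream, AVStream *out_stream,
                          AVPacket *pkt)
{
    av_packet_rescale_ts(pkt, in_stream->time_base, out_stream->time_base);
    pkt->pos = -1;
    pkt->stream_index = out_stream->index;
    // interleaving lets the muxer order audio and video packets by dts
    return av_interleaved_write_frame(ofmt_ctx, pkt);
}

If the input really carries no timestamps, generating them with av_rescale_q from a frame counter and the nominal frame rate (for example, rescaling from a 1/30 time base into the output time base) avoids the same rounding drift.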
-
Texture rendering on iOS using OpenGL ES in Unity project
30 March 2016, by Time1ess
I'm working on a project, part of which is to stream video to my iPhone. I use my laptop to create the video stream for my iPhone with ffmpeg.
The shell command for the stream is below:
ffmpeg \
-f avfoundation -i "1" -s 1280*720 -r 29.97 \
-c:v mpeg2video -q:v 20 -pix_fmt yuv420p -g 1 -threads 4 \
-f mpegts udp://192.168.1.102:6666
With this, I successfully create my video stream.
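Before the library snippets that follow, here is a minimal sketch of my own (not part of the question), assuming FFmpeg 3.0's C API, of how the plugin side could open this udp:// mpegts input and locate the video stream; open_stream is a hypothetical helper name:

extern "C" {
#include <libavformat/avformat.h>
}

// Hypothetical helper: open the udp:// mpegts input produced by the command
// above and return the demuxer context plus the video stream index.
static AVFormatContext *open_stream(const char *url, int *video_index)
{
    av_register_all();            // still required in FFmpeg 3.0
    avformat_network_init();      // needed for udp:// inputs

    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, url, NULL, NULL) < 0)
        return NULL;
    if (avformat_find_stream_info(fmt, NULL) < 0) {
        avformat_close_input(&fmt);
        return NULL;
    }
    *video_index = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (*video_index < 0) {
        avformat_close_input(&fmt);
        return NULL;
    }
    return fmt;   // e.g. open_stream("udp://0.0.0.0:6666", &idx)
}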
In Unity, I want to decode the video stream into a texture. After going through some ffmpeg and Unity tutorials, I created my native plugin library. Some of the code is below (ask me if more is needed):
In my library :
buffer alloc :
uint8_t *buffer;
int buffer_size;
buffer_size = avpicture_get_size(AV_PIX_FMT_RGBA, VIEW_WIDTH, VIEW_HEIGHT);
buffer = (uint8_t *) av_malloc(buffer_size * sizeof(uint8_t));
avpicture_fill((AVPicture *) pFrameRGB, buffer, AV_PIX_FMT_RGBA,
               VIEW_WIDTH, VIEW_HEIGHT);
getContext :
is->sws_ctx = sws_getContext(
    is->video_st->codec->width,
    is->video_st->codec->height,
    is->video_st->codec->pix_fmt,
    VIEW_WIDTH,
    VIEW_HEIGHT,
    AV_PIX_FMT_RGBA,
    SWS_BILINEAR,
    NULL, NULL, NULL
);
sws_scale :
sws_scale(
    is->sws_ctx,
    (uint8_t const * const *)pFrame->data,
    pFrame->linesize,
    0,
    is->video_st->codec->height,
    pFrameRGB->data,
    pFrameRGB->linesize
);
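The decode loop that feeds sws_scale is omitted in the question ("ask me if more is needed"); a minimal sketch of that step, my own and assuming FFmpeg 3.0's pre-3.1 decoding API (the function and parameter names are assumptions), might look like this:

extern "C" {
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
}

// My own sketch, not the asker's code: read packets, decode them with
// avcodec_decode_video2 and convert each decoded frame to RGBA.
static void decode_loop(AVFormatContext *ic, int video_index,
                        AVCodecContext *vctx, struct SwsContext *sws,
                        AVFrame *pFrame, AVFrame *pFrameRGB)
{
    AVPacket pkt;
    int got_frame = 0;
    while (av_read_frame(ic, &pkt) >= 0) {
        if (pkt.stream_index == video_index &&
            avcodec_decode_video2(vctx, pFrame, &got_frame, &pkt) >= 0 &&
            got_frame) {
            sws_scale(sws,
                      (uint8_t const * const *)pFrame->data, pFrame->linesize,
                      0, vctx->height,
                      pFrameRGB->data, pFrameRGB->linesize);
            // pFrameRGB->data[0] now holds the RGBA pixels uploaded by
            // the texture render function below.
        }
        av_packet_unref(&pkt);
    }
}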
texture render :
static void UNITY_INTERFACE_API OnRenderEvent(int texID)
{
    GLuint gltex = (GLuint)(size_t)(texID);
    glBindTexture(GL_TEXTURE_2D, gltex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, VIEW_WIDTH, VIEW_HEIGHT,
                    GL_RGBA, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);
    glGetError();
    return;
}

extern "C" UnityRenderingEvent UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API GetRenderEventFunc()
{
    return OnRenderEvent;
}
In Unity :
texture created :
private Texture2D texture;
private int texID;
texture = new Texture2D (width, height, TextureFormat.RGBA32, false);
texture.filterMode = FilterMode.Point;
texture.Apply ();
GetComponent<Renderer> ().material.mainTexture = texture;
texID = texture.GetNativeTexturePtr ().ToInt32();
update func :
void Update ()
{
    GL.IssuePluginEvent(GetRenderEventFunc(), texID);
}
Video stream info :
Input #0, mpegts, from 'udp://0.0.0.0:6666':
Duration: N/A, start: 2.534467, bitrate: N/A
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv), 1280x720 [SAR 1:1 DAR 16:9], max. 104857 kb/s, 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
Other details aside, my library works fine in the Unity simulator, but when I compiled all my libraries for arm64 and used the Xcode project that Unity created to build my app and ran it, I couldn't get any texture rendered on my iPhone. I checked my network and I'm sure the data had been sent to my iPhone; the debug log showed me that frames were successfully decoded and that the OnRenderEvent function was called.
FYI :
Unity 5.3.2f1 Personal
Xcode 7.2.1
iOS 9.2.1
ffmpeg 3.0
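One hedged diagnostic step (my own sketch, not the asker's plugin): the OnRenderEvent above calls glGetError() but discards the result, so a failing upload on the arm64 build is silent. A variant that logs the error, reusing the plugin's VIEW_WIDTH/VIEW_HEIGHT and pFrameRGB, could look like this; OnRenderEventChecked is a hypothetical name:

#include <OpenGLES/ES2/gl.h>   // iOS build; the editor build would use its own GL header
#include <cstdio>
#include "IUnityInterface.h"   // Unity native plugin header (assumed; the original's includes are not shown)

// Diagnostic variant of the render callback: set the unpack alignment
// explicitly and report any GL error to the Xcode console.
static void UNITY_INTERFACE_API OnRenderEventChecked(int texID)
{
    GLuint gltex = (GLuint)(size_t)texID;
    glBindTexture(GL_TEXTURE_2D, gltex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // RGBA rows from avpicture_fill are tightly packed
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, VIEW_WIDTH, VIEW_HEIGHT,
                    GL_RGBA, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);
    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        printf("glTexSubImage2D failed: 0x%x\n", err);
}

If an error does show up, it may also be worth checking which graphics API the iOS player is actually using (OpenGL ES vs. Metal) in Player Settings, since the value returned by GetNativeTexturePtr is only a GL texture name when the player runs on OpenGL ES.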