
Media (2)
-
Example of action buttons for a collaborative collection
27 February 2013, by
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013, by
Updated: February 2013
Language: English
Type: Image
Other articles (61)
-
Submit bugs and patches
13 April 2011
Unfortunately, no software is ever perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that led to the problem; and a link to the site/page in question.
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...) -
Permissions overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page -
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.
On other sites (14569)
-
mpeg2 ts android ffmpeg openmax
10 October 2014, by WLGfx
The setup is as follows:
- Multicast server, 1000Mbs UDP, MPEG2-TS Part 1 (H.222), streaming live TV channels.
- Quad core 1.5Ghz Android 4.2.2 GLES 2.0 renderer.
- FFMpeg library.
- Eclipse Kepler, Android SDK/NDK, etc. Running on Windows 8.1.
- Output screen 1920 x 1080; I am using a texture of 2048 x 1024 and getting between 35 and 45 frames per second.
The app:
- Renderer thread runs continuously and updates a single texture by uploading segments to the gpu when media images are ready.
- Media handler thread, downloads and processes media from server/or local storage.
- Video thread(s), one for buffering the UDP packets and another for decoding the packets into frames.
I am connecting ffmpeg to the UDP stream just fine, and the packets are being buffered and seemingly decoded fine. The packet buffers are plentiful, with no under- or over-flows. The problem I am facing is that playback appears to chop up frames, i.e. it only plays back 1 out of every so many frames. I understand that I need to distinguish I/P/B frames, but at the moment, hands up, I ain't got a clue. I've even tried a hack to detect I-frames, to no avail. Plus, I am only rendering the frames to less than a quarter of the screen, so I'm not using full-screen decoding.
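For reference, libavcodec reports the picture type on the decoded frame itself, which is more reliable than guessing from packet flags. A minimal sketch, assuming the same pFrame/frameFinished variables as the decoder code below:
// After avcodec_decode_video2() sets frameFinished, the frame itself
// carries its picture type; the packet flag only marks keyframes.
if (frameFinished)
{
    switch (pFrame->pict_type)
    {
    case AV_PICTURE_TYPE_I: break; // keyframe: a safe point to (re)start display
    case AV_PICTURE_TYPE_P: break; // predicted from earlier frames
    case AV_PICTURE_TYPE_B: break; // bi-predicted; frames come out in display order
    default: break;
    }
}
// Packet-side equivalent: (packets[decCurrent].flags & AV_PKT_FLAG_KEY)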
The decoded frames are also stored in separate buffers to cut out page tearing. I've also changed the number of buffers, from 1 to 10, with no luck.
From what I've found about OpenMAX IL, it only handles MPEG2-TS Part 3 (H.264 and AAC), but you can use your own decoder. I understand that you can add your own decode component to it. Would it be worth me trying this route, or should I continue with ffmpeg?
The frame decoder (only the renderer will convert and scale the frames when ready):
/*
 * This function will run through the packets and keep decoding
 * until a frame is ready first, or out of packets
 */
while (packetsUsed[decCurrent])
{
hack_for_i_frame:
    i = avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packets[decCurrent]);
    packetsUsed[decCurrent] = 0; // finished with this one
    i = packets[decCurrent].flags & 0x0001; // AV_PKT_FLAG_KEY: was this a keyframe packet?
    decCurrent++;
    if (decCurrent >= MAXPACKETS) decCurrent = 0;
    if (frameFinished)
    {
        ready_pFrame = pFrame;
        frameReady = true; // notify renderer
        frameCounter++;
        if (frameCounter >= MAXFRAMES) frameCounter = 0;
        pFrame = pFrames[frameCounter];
        return 0;
    }
    else if (i)
        goto hack_for_i_frame;
}
return 0;

The packet reader (spawned as a pthread):
void *mainPacketReader(void *voidptr)
{
    int res;

    while ( threadState == TS_RUNNING )
    {
        if ( packetsUsed[prCurrent] )
        {
            LOGE("Packet buffer overflow, dropping packet...");
            av_read_frame( pFormatCtx, &packet ); // read into a scratch packet and discard it
        }
        else if ( av_read_frame( pFormatCtx, &packets[prCurrent] ) >= 0 )
        {
            if ( packets[prCurrent].stream_index == videoStream )
            {
                packetsUsed[prCurrent] = 1; // flag as used
                prCurrent++;
                if ( prCurrent >= MAXPACKETS )
                {
                    prCurrent = 0;
                }
            }
            // here check if the packet is audio and add to audio buffer
        }
    }
    return NULL;
}

And the renderer simply does this:
// texture has already been bound before calling this function
if ( frameReady == false ) return;

AVFrame *temp; // set to the frame 'not' currently being decoded
temp = ready_pFrame;

sws_scale(sws_ctx, (uint8_t const* const *)temp->data,
          temp->linesize, 0, pCodecCtx->height,
          pFrameRGB->data, pFrameRGB->linesize);

glTexSubImage2D(GL_TEXTURE_2D, 0,
                XPOS, YPOS, WID, HGT,
                GL_RGBA, GL_UNSIGNED_BYTE, buffer);

frameReady = false;

In the past, libvlc had audio syncing problems too, so that is my reason for going with ffmpeg and doing all the donkey work from scratch.
If anybody has any pointers on how to stop the choppiness of the video playback (it works great in VLC player), or possibly another route to go down, it would be seriously appreciated.
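For what it's worth, one pattern that targets exactly this symptom is pacing frames by their timestamps instead of relying on a single frameReady flag, which silently overwrites any frame decoded while the renderer is busy. A minimal sketch, assuming the same pFormatCtx/videoStream/pFrame variables and the FFmpeg API of that era (playback_start is a hypothetical wall-clock reference):
// Derive each frame's display time from the stream time base, then
// hand frames to a queue the renderer drains on schedule.
double tb = av_q2d(pFormatCtx->streams[videoStream]->time_base);
int64_t pts = av_frame_get_best_effort_timestamp(pFrame); // AV_NOPTS_VALUE if unknown
if (pts != AV_NOPTS_VALUE)
{
    double display_at = pts * tb; // seconds on the stream clock
    // sleep until (playback_start + display_at), then enqueue the frame
    // for the renderer instead of overwriting ready_pFrame
}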
EDIT: I removed the hack for the I-frame (completely useless), moved the sws_scale function from the renderer to the packet decoder, and left the UDP packet reader thread alone.
In the meantime, I've also changed the packet reader and packet decoder threads' priority to real-time. Since doing that, I don't get shedloads of dropped packets.
-
How to queue ffmpeg FIFO
29 April 2013, by Francois
We are building a service similar to YouTube. Converting runs fine with ffmpeg, using this script from another post here:
#!/bin/bash
pipe=/tmp/ffmpeg
trap "rm -f $pipe" EXIT
# creating the FIFO
[[ -p $pipe ]] || mkfifo $pipe
while true; do
# can't just use "while read line" if we
# want this script to continue running.
read line < $pipe
# now implementing a bit of security,
# feel free to improve it.
# we ensure that the command is a ffmpeg one.
[[ $line =~ ^ffmpeg ]] && bash <<< "$line"
done

This works pretty well when I send commands one by one to the named pipe. When I send more than one at the same time, the second one blocks the terminal until the first one has finished; if I try more than 2, the third one will not be transcoded.
So I tried to work around this by sending in the background to free the terminal (just drop the echo command and close the ssh connection), but this doesn't work. Then I played around with screen -X, but also no luck. Maybe someone has a good idea of how to deal with this.
What I want to do is: every uploaded video that needs transcoding will send an echo to the named pipe. The FIFO should accept it without blocking the terminal. So I think I need something that really queues the ffmpeg input.
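One way to get a real queue, sketched here in C for concreteness: a single long-running worker keeps the FIFO open and executes jobs strictly one at a time, so the writer's echo returns as soon as the line fits in the pipe buffer instead of waiting for a transcode to finish. The path and the ffmpeg prefix check mirror the bash script above; everything else is an assumption:
/* queue_worker.c: drain /tmp/ffmpeg and run one ffmpeg job at a time */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char line[4096];

    for (;;)
    {
        /* fopen blocks until a writer opens the FIFO, then streams its
         * lines; after EOF, loop around and wait for the next writer. */
        FILE *fp = fopen("/tmp/ffmpeg", "r");
        if (fp == NULL) { perror("fopen"); return 1; }

        while (fgets(line, sizeof line, fp))
        {
            if (strncmp(line, "ffmpeg", 6) != 0) /* same safety check as the script */
                continue;
            system(line); /* runs strictly one job at a time */
        }
        fclose(fp);
    }
}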
Kindest regards
Francois -
Video call between eyebeam and baresip SIP clients
7 December 2012, by user1490337
I am trying to achieve a video call between 2 SIP clients:
- Baresip
- Eyebeam
So far I have succeeded in getting the audio stream both ways, but the video stream is one-way, i.e. I am getting the stream at the baresip terminal but I cannot see video at the eyeBeam terminal. I can't understand where I am going wrong.
eyeBeam is sending STAP-A and FU-A packets, as I checked through Wireshark, but baresip is not sending any STAP-A or FU-A packets to eyeBeam, hence no video. Both clients support H.264.
Pointers in the right direction are welcome.