
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (75)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
Customizing with your own logo, banner, or background image
5 September 2013. Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Updating from version 0.1 to 0.2
24 June 2013. An explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.2. What's new:
On the software-dependency side: the latest versions of FFMpeg (>= v1.2.1) are used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
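The note above mentions installing MediaInfo and FFprobe for metadata retrieval. As a rough illustration only (this is not MediaSPIP's actual invocation, and the file name is made up), ffprobe can dump a file's container and stream metadata as JSON:

```shell
# Generate a short synthetic clip so the example is self-contained,
# then dump its container and stream metadata as JSON with ffprobe.
# (sample.mp4 is an illustrative name, not a MediaSPIP file.)
ffmpeg -y -v error -f lavfi -i testsrc=duration=1:size=64x64:rate=10 sample.mp4
ffprobe -v error -print_format json -show_format -show_streams sample.mp4
```

The JSON output includes per-stream fields such as codec name, dimensions, and duration, which is the kind of information a CMS would store alongside an upload.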
On other sites (9807)
-
concat subsections of a video using ffmpeg [migrated]
30 March 2013, by jefftimesten. I would like to take several subsections of a video and concatenate them using the concat ffmpeg filter. It's just like the example in the FFmpeg documentation, except that all of the clips come from the same source video.
Here is what I am trying:
ffmpeg \
-ss 1.0 -frames:v 20 -i myInput.mp4 \
-ss 2.0 -frames:v 20 -i myInput.mp4 \
-ss 3.0 -frames:v 20 -i myInput.mp4 \
-filter_complex '[0:0][0:1][1:0][1:1][2:0][2:1]concat=n=3:v=1:a=2[v][a1][a2]' \
-map '[v]' -map '[a1]' -map '[a2]' myOutput.mp4

When I try this, I get the following error (full output here):

Stream specifier ':0' in filtergraph description [0:0][0:1][1:0][1:1][2:0][2:1]concat=n=3:v=1:a=2[v][a1][a2] matches no streams.
A few things:
- What's with the error? According to the stderr output, those streams do exist. What am I missing?
- Shouldn't the -ss (and -frames:v) options be reflected in the "Duration: ... start: ..." line of the stderr output when the inputs are listed?
- Will the -frames:v option even work to specify the duration of an input? (Apparently -t only applies to outputs?) Is there a way to specify an input's duration in seconds instead of frames?
Help me LordNeckbeard — you're my only hope !
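For what it's worth, one common workaround for this kind of graph (a sketch, not taken from the post, assuming the input has exactly one video and one audio stream) is to feed the file once and cut each subsection inside the filtergraph with split plus trim/atrim, so every leg of concat is a filter output rather than an input stream specifier:

```shell
# Self-contained sketch: generate a 5-second test clip standing in for
# myInput.mp4, then cut two one-second subsections of the single input
# with split + trim/atrim and join them with concat. Subsection bounds
# are given in seconds directly to trim, so per-input -ss/-frames:v
# options are not needed at all.
ffmpeg -y -v error -f lavfi -i testsrc=duration=5:size=320x240:rate=25 \
       -f lavfi -i sine=frequency=440:duration=5 -shortest myInput.mp4
ffmpeg -y -v error -i myInput.mp4 -filter_complex \
  "[0:v]split=2[va][vb];[0:a]asplit=2[aa][ab]; \
   [va]trim=start=1:duration=1,setpts=PTS-STARTPTS[v0]; \
   [aa]atrim=start=1:duration=1,asetpts=PTS-STARTPTS[a0]; \
   [vb]trim=start=2:duration=1,setpts=PTS-STARTPTS[v1]; \
   [ab]atrim=start=2:duration=1,asetpts=PTS-STARTPTS[a1]; \
   [v0][a0][v1][a1]concat=n=2:v=1:a=1[v][a]" \
  -map '[v]' -map '[a]' myOutput.mp4
```

The split/asplit filters are needed because each input stream may be consumed only once inside a filtergraph; setpts/asetpts reset the timestamps of each cut so concat sees contiguous segments.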
-
Apply sound effects on a video file
31 March 2013, by talhamalik22. I am a little misguided here and it seems I am totally lost. I am developing an Android app whose core idea is a video recorder and player that applies sound effects to the voices, or any sound, that it records. By sound effect I mean: if I record a video of a person giving a speech, the video itself should be unchanged, but his/her voice should sound like the voice in the Talking Tom Cat app. I hope you understand the idea. A similar app is Helium Booth; you can check it here. I am trying to use libraries like libSonic and libpd, and I tried XUGGLE too.
I read somewhere that Xuggle is not really developed for mobile devices, so I dropped it. What I want now is to apply the effect to the voice at run time, i.e. while recording, the pitch of the sound should be altered and saved immediately. What I get with these libraries is that I can only apply the effect after the video is recorded. That would mean ripping the audio from the video, applying the change in pitch and frequency, and then muxing the modified audio back with the original video, and I have no idea how to do that.
Please show me the right approach and tools if possible. Regards
-
ffmpeg in child process doesn't exit when parent closes pipe
28 March 2013, by user2221129. This code snippet is from a pthread worker. It reads configuration/options to pass to ffmpeg. The data piped to ffmpeg comes from a ring buffer of video frames (as a proof of concept; in the final implementation these would come from a camera device). The parent writes to the child via a pipe wired up with dup2. The problem I'm having is that most of the time the ffmpeg process doesn't seem to notice that the parent has closed the pipe (strace shows ffmpeg blocked in read). There are probably also some unhandled mutex situations.
#define CHILD_READ   writepipe[0]
#define PARENT_WRITE writepipe[1]

#define MAX_VIDEO_BUFFERS 30
static int last_frame_written = -1;
static int buffer_size = -1;
static void *video_buffer[MAX_VIDEO_BUFFERS];

#define MAXTHREADS 2
static pthread_t thread[MAXTHREADS];
static int sigusr[MAXTHREADS];
static int sigusr_thread[MAXTHREADS];

#define MAXPARAMETERS 100

void* encoder_thread(void *ptr)
{
    int param;
    const char **parameters = new const char* [MAXPARAMETERS];
    ssize_t bytes_written_debug = 0;
    int writepipe[2] = {-1,-1};
    int last_frame_read = -1;
    pid_t childpid;

    // xxx
    if ( 0 != pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL) )
    {
        fprintf(stderr,"pthread_setcancelstate could not be set\n");
    }

    param = *(int*)ptr;
    fprintf(stderr,"param = %d\n", param);

    switch (param)
    {
        ...read config file and populate parameters for child call...
    }

    while ( last_frame_written == -1 )
    {
        fprintf(stderr,"ENCODER THREAD WAITING\n");
        sleep(1);
    }

    if ( pipe(writepipe) < 0 )
    {
        ...handle error case...
    }

    if ( (childpid = fork()) < 0)
    {
        fprintf(stderr,"child failed\n");
    }
    else if ( 0 == childpid ) /* in the child */
    {
        dup2(CHILD_READ, STDIN_FILENO);
        //close(CHILD_READ); // doesn't seem to matter
        close(PARENT_WRITE);
        execv("/usr/bin/ffmpeg",(char **)parameters);
        return;
    }
    else /* in the parent */
    {
        fprintf(stderr,"THREAD CHILD PID: %d\n",childpid);
        close(CHILD_READ);

        while ( last_frame_written > -1 && 0==sigusr_thread[param] )
        {
            if ( last_frame_read != last_frame_written )
            {
                // send frame to child (ffmpeg)
                last_frame_read = last_frame_written;
                bytes_written_debug = write(PARENT_WRITE, video_buffer[last_frame_read], buffer_size);
                if ( bytes_written_debug < 0 )
                {
                    fprintf(stderr, "write error\n");
                    break;
                }
            }
            usleep(10000); // assume ~100Hz rate
        }

        /// Problems begin now: no more data from main process.
        /// Shouldn't this close "close" the pipe created with dup2 and hence EOF ffmpeg?
        if ( close(PARENT_WRITE) )
        {
            fprintf(stderr,"\n\nclose failure! wtf?\n\n\n");
        }

        // debug sleep:
        // waiting 10 sec doesn't seem to help...
        // trying to give ffmpeg some time if it needs it
        sleep(10);

        // kill never fails here:
        if ( !kill(childpid,SIGINT) )
        {
            fprintf(stderr, "\n\nkill child %d\n", childpid);
        }
        else
        {
            fprintf(stderr, "\n\nkill failed for child %d\n", childpid);
        }

        fprintf(stderr,"\n\nwaiting\n\n\n");
        int status = 0;
        // wait forever (in most cases); ps -axw shows all the ffmpeg's
        wait(&status);
    }
}
int main()
{
    ...
    for ( int i = 0; i < MAXTHREADS; ++i)
    {
        sigusr[i] = 0;
        sigusr_thread[i] = 0;
        param[i] = i;
        iret = pthread_create( &thread[i], NULL, encoder_thread, (void*) &param[i] );
    }
    ...
    // Maybe this is important to the pthread signaling?
    // Don't think so because the thread's SIGINT isn't picked up by the signal handler (not shown)
    sigemptyset(&set);
    sigaddset(&set, SIGHUP);
    sigaddset(&set, SIGINT);
    sigaddset(&set, SIGUSR1);
    sigaddset(&set, SIGUSR2);
    sigaddset(&set, SIGALRM);
    /* block out these signals */
    sigprocmask(SIG_BLOCK, &set, NULL);

    ...read file/populate frame buffer until file exhausted...

    fprintf(stderr, "waiting for threads to exit\n");
    for ( int i = 0; i < MAXTHREADS; ++i)
    {
        sigusr_thread[i] = 1;
        pthread_join( thread[i], NULL);
    }
    fprintf(stderr, "done waiting for threads to exit\n");
    ...
    return 0;
}

I've embedded comments and questions in the thread's parent code after the frame-buffer read/write loop.
Hope I've provided enough detail; Stack Overflow posting noob! Thank you.