
Other articles (67)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, go to the "Administer" section of the site.
    From there, the navigation menu gives access to a "Language management" section where support for new languages can be enabled.
    Each newly added language can still be deactivated as long as no object has been created in that language. Once one has, the language is greyed out in the configuration and (...)

  • Other interesting software

    12 April 2011

    We do not claim to be the only ones doing what we do... and we certainly do not claim to be the best either... We simply try to do what we do well, and better and better...
    The following list covers software that, to a greater or lesser extent, aims to do what MediaSPIP does, or that MediaSPIP more or less tries to do likewise; either way...
    We do not know them and have not tried them, but you may want to take a look.
    Videopress
    Website: (...)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation from users as well as developers, including: critique of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages.
    To contribute, register to the project users’ mailing (...)

On other sites (6821)

  • ffmpeg produces bad output when called from execve in c++

    31 May 2016, by Arheisel

    I'm writing a C++ program that calls ffmpeg. A couple of days ago I got it working using std::system:

    std::system("ffmpeg -threads auto -y -r 1.74659 -i /mnt/ev_ramdsk/1/%05d-capture.jpg -pix_fmt yuv420p -preset ultrafast -r 10 /mnt/ev_ramdsk/1/video.mp4");

    but this only worked once; now it produces .mp4 videos of 8 MB or so that cannot be played anywhere. Following a suggestion on a previous question, I moved to execve.

    Here is my code:

     child_pid = fork();
     if(child_pid < 0){
         syslog(LOG_ERR, "ERROR: ffmpeg forking failed");
         return false;
     }
     else if(child_pid > 0){
         syslog(LOG_DEBUG, "DEBUG: forking succeeded, pid: %d", child_pid);
     }
     else if(child_pid == 0){
         char *newargv[16];
         for(int i=0; i < 15; i++) newargv[i] = (char *)malloc(sizeof(char) * 60); //allocate the array in memory
         strcpy(newargv[0], "/usr/bin/ffmpeg");
         strcpy(newargv[1], "-threads");
         strcpy(newargv[2], "auto");
         strcpy(newargv[3], "-y");
         strcpy(newargv[4], "-framerate");
         tempSS << fps;
         strcpy(newargv[5], tempSS.str().c_str());
         tempSS.str(std::string());
         strcpy(newargv[6], "-i");
         strcpy(newargv[7], std::string(conf->dir_ram + dest + "%05d-capture.jpg").c_str());
         strcpy(newargv[8], "-pix_fmt");
         strcpy(newargv[9], "yuv420p");
         strcpy(newargv[10], "-preset");
         strcpy(newargv[11], "ultrafast");
         strcpy(newargv[12], "-r");
         strcpy(newargv[13], "25");
         strcpy(newargv[14], std::string(conf->dir_ram + dest + "video.mp4").c_str());
         newargv[15] = NULL;

         for(int i=0; i < 15; i++){
             tempSS << "newargv[" << i << "] = \"" << newargv[i] << "\", ";
         }
         syslog(LOG_DEBUG, "DEBUG:newargv: %s", tempSS.str().c_str());
         tempSS.str(std::string());

         char *newenviron[] = { NULL };

         if(execve(newargv[0], newargv, newenviron) == -1){
             syslog(LOG_ERR, "ERROR: execve returned -1");
             exit(EXIT_SUCCESS);
         }
     }

     wpid = wait(&status);
     syslog(LOG_DEBUG, "DEBUG: ffmpeg child terminated, pid: %d, status: %d", wpid, status);

    The output of syslog is:

    May 28 00:25:03 SERVER dt_ev_maker[10471]: DEBUG: forking succeeded, pid: 10658
    May 28 00:25:03 SERVER dt_ev_maker[10658]: DEBUG:newargv:
    newargv[0] = "/usr/bin/ffmpeg",
    newargv[1] = "-threads",
    newargv[2] = "auto",
    newargv[3] = "-y",
    newargv[4] = "-framerate",
    newargv[5] = "1.45097",
    newargv[6] = "-i",
    newargv[7] = "/mnt/ev_ramdsk/1/%05d-capture.jpg",
    newargv[8] = "-pix_fmt",
    newargv[9] = "yuv420p",
    newargv[10] = "-preset",
    newargv[11] = "ultrafast",
    newargv[12] = "-r",
    newargv[13] = "25",
    newargv[14] = "/mnt/ev_ramdsk/1/video.mp4",
    May 28 00:25:03 SERVER dt_ev_maker[10471]: DEBUG: ffmpeg child terminated, pid: 10658, status: 256

    In this case the video is about 90 B in size and is also corrupted.

    NOTE: if I run the same command from the command line, the video plays normally.

    What am I doing wrong?

    Thanks in advance !

    EDIT

    Thanks to Ouroborus (changes reflected above) I got it to produce 18 MB videos, but I can't play them.
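    For anyone hitting the same pitfalls, here is a minimal sketch (not the asker's exact code; the values of conf->dir_ram and dest are assumed) that builds the argument vector from std::string, so a long path cannot overflow a fixed 60-byte malloc'd buffer, and that notes execv inherits the parent environment instead of the empty newenviron passed to execve:

    ```cpp
    #include <cstdio>
    #include <string>
    #include <vector>

    int main() {
        // Assumed values standing in for conf->dir_ram and dest from the question.
        std::string dir_ram = "/mnt/ev_ramdsk/";
        std::string dest    = "1/";

        // Arguments live in std::string, so any path length is safe.
        std::vector<std::string> args = {
            "/usr/bin/ffmpeg", "-threads", "auto", "-y",
            "-framerate", "1.45097",
            "-i", dir_ram + dest + "%05d-capture.jpg",
            "-pix_fmt", "yuv420p", "-preset", "ultrafast",
            "-r", "25", dir_ram + dest + "video.mp4",
        };

        // Build the char* view execv expects; the strings stay alive in `args`.
        std::vector<char*> argv;
        for (auto& a : args) argv.push_back(&a[0]);
        argv.push_back(nullptr);

        // In the child you would now call:
        //     execv(argv[0], argv.data());
        // execv inherits the parent's environment (PATH, HOME, ...), unlike
        // execve with an empty newenviron, which leaves ffmpeg with none.
        for (size_t i = 0; argv[i] != nullptr; ++i)
            std::printf("argv[%zu] = %s\n", i, argv[i]);
        return 0;
    }
    ```

    Printing the vector before the exec call makes it easy to compare against the syslog output above.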

  • What is the best way to Stream multiple CCTV RTSP into Node.js

    1 December 2020, by Borneo Dev

    I am starting to build a Node.js application that will receive RTSP video streams from IP CCTV cameras.

    I have spent a considerable amount of time studying OpenCV and ffmpeg as tools for listening to the CCTV RTSP streams.

    With OpenCV, I am confused about whether I need to include opencv.js in my Node script or download and install an OpenCV build on my OS.

    With ffmpeg, I'm not sure whether I can access the Mat object (Mat as in OpenCV) to pass to a video-processing module (such as TensorFlow for object detection). When I looked at and tested ffmpeg RTSP examples, I could not find a way to store the video buffer frames in a Mat variable for processing; the video is either played in ffplay or saved to the file system.

    I know these are general questions, as I have not started coding yet.

    I would appreciate some help pointing me to the right tools and methods to achieve this. TQ.


  • Setting HLS segment times

    16 February 2016, by James Townsend

    I am passing a processed video from OpenCV to ffmpeg via a pipe; here is the code:

    ./OpenCV & \
    tail -n +0 -f out.avi  | ffmpeg -i pipe:0  -hls_time 1 -hls_list_size 0 -hls_wrap 10 -hls_segment_filename '%03d.ts' stream.m3u8

    My issue is that the output .ts files do not have a uniform duration; it varies from file to file.

    They are mostly long, say 60 seconds. This means a connecting client has to wait for the first segment to finish before the playlist file (.m3u8) is created. So in this example playback is 60 seconds or so behind the live video, and if the next .ts file is longer, streaming stops until it is finished. If the client tries to play before the next .ts file is created, the first .ts file is played instead.

    The frame rate from openCV is 1 frame per second.

    tail turns the output file of OpenCV (out.avi) into stdout.

    Any help would be great.
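
    One likely cause, assuming the output is encoded at a constant 25 fps: HLS can only cut a segment at a keyframe, and without GOP settings the encoder may place keyframes far apart, so -hls_time 1 is not honored. A sketch of forcing a keyframe every hls_time seconds (the GOP arithmetic is shown live; the ffmpeg re-run is left commented as a hypothetical variant of the question's pipeline):

    ```shell
    # HLS segments can only start on a keyframe, so force one every
    # HLS_TIME seconds. Assumption: output is a constant 25 fps.
    FPS=25
    HLS_TIME=1
    GOP=$((FPS * HLS_TIME))
    echo "GOP=$GOP"

    # Hypothetical re-run of the question's pipeline with a fixed GOP
    # (-g / -keyint_min set the interval, -sc_threshold 0 disables
    # scene-cut keyframes that would break the regular spacing):
    # tail -n +0 -f out.avi | ffmpeg -i pipe:0 -g "$GOP" -keyint_min "$GOP" \
    #     -sc_threshold 0 -hls_time "$HLS_TIME" -hls_list_size 0 -hls_wrap 10 \
    #     -hls_segment_filename '%03d.ts' stream.m3u8
    ```

    With keyframes every second, the muxer can close each .ts segment at close to the requested one-second boundary.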