
Other articles (44)

  • Automatic backup of SPIP channels

    1 April 2010

    As part of setting up an open platform, it is important for hosts to have reasonably regular backups available to guard against any problems.
    This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which creates a zip archive of the site's important data (documents, elements (...)
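
    For illustration only, here is a minimal bash sketch of the same two steps done by hand (a MySQL dump plus a zip archive); it is not the Saveauto or mes_fichiers_2 plugins themselves, and the database name, credentials and paths are assumptions:

    #!/bin/bash
    # Hypothetical manual equivalent of the plugin-based backup described above.
    set -eu
    DB_NAME="spip"                      # assumed database name
    DB_USER="spip"                      # assumed database user
    SITE_DIR="/var/www/mediaspip"       # assumed site root
    BACKUP_DIR="/var/backups/mediaspip" # assumed destination
    STAMP=$(date +%Y%m%d-%H%M%S)

    mkdir -p "$BACKUP_DIR"

    # Database backup as a MySQL dump (what Saveauto automates); the dump can be
    # re-imported through phpMyAdmin. -p prompts for the password interactively.
    mysqldump -u "$DB_USER" -p "$DB_NAME" > "$BACKUP_DIR/db-$STAMP.sql"

    # Zip archive of the site's important data (what mes_fichiers_2 automates):
    # uploaded documents (IMG/) and local configuration (config/).
    zip -r "$BACKUP_DIR/files-$STAMP.zip" "$SITE_DIR/IMG" "$SITE_DIR/config"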

  • Automatic MediaSPIP installation script

    25 April 2011

    To work around installation difficulties caused mainly by server-side software dependencies, an "all-in-one" bash installation script was created to simplify this step on a server running a compatible Linux distribution.
    To use it you need SSH access to your server and a "root" account, which is required to install the dependencies. Contact your host if you do not have these.
    The documentation on using the installation script (...)

  • Using and configuring the script

    19 January 2011

    Information specific to the Debian distribution
    If you use this distribution, you will need to enable the "debian-multimedia" repositories as explained here:
    Since version 0.3.1 of the script, the repository can be enabled automatically after a prompt.
    Retrieving the script
    The installation script can be retrieved in two different ways.
    Via svn, using the following command to fetch the up-to-date source code:
    svn co (...)
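
    As a purely illustrative sketch of those two steps (the exact repository line and the script's svn URL are truncated in the excerpt, so the values below are placeholders, not the real ones):

    # Enable the "debian-multimedia" repository (the line below is an assumption;
    # use the one given in the linked documentation), then refresh the package lists.
    echo "deb http://www.deb-multimedia.org stable main non-free" | sudo tee /etc/apt/sources.list.d/multimedia.list
    sudo apt-get update

    # Fetch the up-to-date installation script via svn;
    # <repository-url> is a placeholder for the URL the excerpt cuts off.
    svn co <repository-url> mediaspip_install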

On other sites (5796)

  • How can I develop an image recognition program?

    18 July 2017, by SeongHyun Lee

    My program needs to recognize the advertisements between TV programs, but I don't know how to detect the ads.
    I had sound recognition in mind, but it is very difficult.
    I'm using the FFmpeg library.
    For reference, here is the VideoState struct:

    typedef struct VideoState {
    SDL_Thread *read_tid;
    SDL_Thread *video_tid;
    SDL_Thread *refresh_tid;
    AVInputFormat *iformat;
    int no_background;
    int abort_request;
    int force_refresh;
    int paused;
    int last_paused;
    int queue_attachments_req;
    int seek_req;
    int seek_flags;
    int64_t seek_pos;
    int64_t seek_rel;
    int read_pause_return;
    AVFormatContext *ic;

    int audio_stream;

    int av_sync_type;
    double external_clock; /* external clock base */
    int64_t external_clock_time;

    double audio_clock;
    double audio_diff_cum; /* used for AV difference average computation */
    double audio_diff_avg_coef;
    double audio_diff_threshold;
    int audio_diff_avg_count;
    AVStream *audio_st;
    PacketQueue audioq;
    int audio_hw_buf_size;
    DECLARE_ALIGNED(16,uint8_t,audio_buf2)[AVCODEC_MAX_AUDIO_FRAME_SIZE * 4];
    uint8_t silence_buf[SDL_AUDIO_BUFFER_SIZE];
    uint8_t *audio_buf;
    uint8_t *audio_buf1;
    unsigned int audio_buf_size; /* in bytes */
    int audio_buf_index; /* in bytes */
    int audio_write_buf_size;
    AVPacket audio_pkt_temp;
    AVPacket audio_pkt;
    struct AudioParams audio_src;
    struct AudioParams audio_tgt;
    struct SwrContext *swr_ctx;
    double audio_current_pts;
    double audio_current_pts_drift;
    int frame_drops_early;
    int frame_drops_late;
    AVFrame *frame;

    enum ShowMode {
        SHOW_MODE_NONE = -1, SHOW_MODE_VIDEO = 0, SHOW_MODE_WAVES,
        SHOW_MODE_RDFT, SHOW_MODE_NB
    } show_mode;
    int16_t sample_array[SAMPLE_ARRAY_SIZE];
    int sample_array_index;
    int last_i_start;
    RDFTContext *rdft;
    int rdft_bits;
    FFTSample *rdft_data;
    int xpos;

    SDL_Thread *subtitle_tid;
    int subtitle_stream;
    int subtitle_stream_changed;
    AVStream *subtitle_st;
    PacketQueue subtitleq;
    SubPicture subpq[SUBPICTURE_QUEUE_SIZE];
    int subpq_size, subpq_rindex, subpq_windex;
    SDL_mutex *subpq_mutex;
    SDL_cond *subpq_cond;

    double frame_timer;
    double frame_last_pts;
    double frame_last_duration;
    double frame_last_dropped_pts;
    double frame_last_returned_time;
    double frame_last_filter_delay;
    int64_t frame_last_dropped_pos;
    double video_clock;                          ///< pts of last decoded frame / predicted pts of next decoded frame
    int video_stream;
    AVStream *video_st;
    PacketQueue videoq;
    double video_current_pts;                    ///< current displayed pts (different from video_clock if frame fifos are used)
    double video_current_pts_drift;              ///< video_current_pts - time (av_gettime) at which we updated video_current_pts - used to have running video pts
    int64_t video_current_pos;                   ///< current displayed file pos
    VideoPicture pictq[VIDEO_PICTURE_QUEUE_SIZE];
    int pictq_size, pictq_rindex, pictq_windex;
    SDL_mutex *pictq_mutex;
    SDL_cond *pictq_cond;
    #if !CONFIG_AVFILTER
    struct SwsContext *img_convert_ctx;
    #endif

    char filename[1024];
    int width, height, xleft, ytop;
    int step;

    #if CONFIG_AVFILTER
    AVFilterContext *in_video_filter;           ///< the first filter in the video chain
    AVFilterContext *out_video_filter;          ///< the last filter in the video chain
    int use_dr1;
    FrameBuffer *buffer_pool;
    #endif

    int refresh;
    int last_video_stream, last_audio_stream, last_subtitle_stream;

    SDL_cond *continue_read_thread;

    enum V_Show_Mode v_show_mode;
    } VideoState;

    What can I use for my program? I really need your help. Thank you!
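
    Not an answer from the thread itself, but one commonly used, hedged approach: ad breaks are usually bounded by short runs of black frames and silence, and FFmpeg's blackdetect and silencedetect filters can log those boundaries without touching the VideoState internals above. A sketch, with placeholder file names:

    # Scan a recording and log candidate ad boundaries.
    # blackdetect reports black intervals, silencedetect reports quiet intervals;
    # points where the two coincide are likely program/ad transitions.
    ffmpeg -i recording.ts \
           -vf "blackdetect=d=0.5:pix_th=0.10" \
           -af "silencedetect=n=-50dB:d=0.5" \
           -f null - 2> detect.log

    # The detected intervals appear in the log, e.g. "black_start: ... black_end: ..."
    # and "silence_start: ... silence_end: ...".
    grep -E "blackdetect|silencedetect" detect.log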

  • ffmpeg encoder streaming issues

    8 August 2017, by bobsingh1

    I am trying to build an FFmpeg encoder on Linux. I started with a custom-built server with dual 1366 2.6 GHz Xeon CPUs (6 cores) and 16 GB RAM, running a minimal install of Ubuntu 16.04, and built FFmpeg with h264 and aac. I am taking live OTA source channels and encoding/streaming them with the following parameters:

    -vcodec libx264 -preset superfast -crf 25 -x264opts keyint=60:min-keyint=60:scenecut=-1 -bufsize 7000k -b:v 6000k -maxrate 6300k -muxrate 6000k -s 1920x1080 -format yuv420p -g 60 -sn -c:a aac -b:a 384k -ar 44100

    And I am able to successfully send the output over UDP as MPEG-TS. My problem starts with the 5th stream: the server can handle four streams, but as soon as I introduce a 5th stream I start seeing hiccups in the output. Looking at CPU usage with top, I still see only 65% to 75% usage, with occasional spikes to 80%. Memory usage is well within acceptable limits. So I am wondering whether top is not giving me accurate CPU usage or something is not right with ffmpeg. The server is isolated for UDP in/out on a 1 Gbps network.
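
    For reference, a complete single-channel command line assembled from those parameters might look like the sketch below; the UDP input/output addresses are placeholders, and -pix_fmt is used for the pixel format (the post writes -format):

    # Hypothetical one-channel transcode: OTA source in, MPEG-TS over UDP out.
    ffmpeg -i "udp://239.1.1.1:5000" \
           -vcodec libx264 -preset superfast -crf 25 \
           -x264opts keyint=60:min-keyint=60:scenecut=-1 \
           -bufsize 7000k -b:v 6000k -maxrate 6300k -muxrate 6000k \
           -s 1920x1080 -pix_fmt yuv420p -g 60 -sn \
           -c:a aac -b:a 384k -ar 44100 \
           -f mpegts "udp://239.1.1.2:5000?pkt_size=1316"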

    I decided to increase the CPU power and installed two 3.5 GHz CPUs (6 cores), thinking the CPU clock was perhaps the limit. To my surprise the results were no different. So now I am wondering whether there is some built-in limit I am hitting when I process at 1080p. If I change the resolution to 720p the server can process 8 streams, but 720p is not acceptable.
    My target is 10 1080p streams per server.
    So my questions are:
    1. If I use a quad motherboard and increase the CPU count to 4 (6 or 8 cores each), will I get 10 1080p streams? Is there any theoretical maximum I can reach with ffmpeg per machine?
    2. Do cores matter more, or does clock speed matter more?
    3. Any suggestions for improving my options? I have tried the ultrafast preset, but the output quality is unacceptable.

    Thanks in advance

  • Do I have to reduce the GOP size for zero-latency streaming?

    10 June 2017, by AndreKR

    I am piping frames into FFmpeg at quite a slow rate (1 per second) and I want to stream them out with very low latency.

    Not only are there sources (for example here and here) that don't mention that I need to set the GOP size (keyint) to a small value, there are even sources (like here and here) that explicitly say I don't have to set the GOP size to a small value.

    However, so far the only way I found to reduce the really long start delay is to actually reduce the GOP size to 1.

    Anyway, here's my current command line:

    ffmpeg -f image2pipe
          -probesize 32
          -i -
          -c:v libx264
          -preset veryfast
          -crf 23
          -vsync 2
          -movflags "frag_keyframe+empty_moov"
          -profile baseline
          -x264-params "intra-refresh=1"
          -tune zerolatency
          -f mp4
          -

    (I also tried adding :bframes=0:force-cfr:no-mbtree:sync-lookahead=0:sliced-threads:rc-lookahead=0 to -x264-params (which is what -tune zerolatency is supposed to do) because some of those values didn't appear in the debug output, but as expected it had no effect.)

    As you can see here, we are already 182 frames (= 3 minutes wall clock) into the stream, but it still hasn’t emitted anything (size was 1kB from the start).

    frame=  182 fps=1.0 q=20.0 size=       1kB time=00:00:07.24 bitrate=   0.8kbits/s speed=0.0402x

    This actually talks about the time-to-first-picture, but it makes it seem like it's not a big deal. ;) It is for me, so maybe I have to make the first GOP 1 frame long and then I can switch to longer GOPs? Can FFmpeg do that?
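
    For what it's worth, here is a hedged sketch of the blunt workaround described above (a 1-frame GOP, so every frame is a keyframe and the fragmented MP4 muxer can emit a fragment per frame); whether a stream can start with a short GOP and then switch to longer ones is exactly the open question, so this does not answer it:

    ffmpeg -f image2pipe \
          -probesize 32 \
          -i - \
          -c:v libx264 \
          -preset veryfast \
          -crf 23 \
          -vsync 2 \
          -g 1 \
          -x264-params "keyint=1:min-keyint=1" \
          -movflags "frag_keyframe+empty_moov" \
          -profile:v baseline \
          -tune zerolatency \
          -f mp4 \
          -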