Advanced search

Media (1)

Keyword: - Tags -/3GS

Other articles (102)

  • Improving the base version

    13 September 2013

    A nicer multiple select
    The Chosen plugin improves the ergonomics of multiple-select fields. See the two images below for a comparison.
    To use it, activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-select lists (...)

  • User profiles

    12 April 2011

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
    The user can also reach the profile editor from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, go to the "Administer" section of the site.
    From there, in the navigation menu, you can reach a "Language management" section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language. Once one has, it becomes greyed out in the configuration and (...)

On other sites (9561)

  • avcodec/pnm: avoid mirroring PFM images vertically

    16 November 2022, by Leo Izen
    avcodec/pnm: avoid mirroring PFM images vertically
    

    PFM (aka Portable FloatMap) encodes its scanlines from bottom-to-top,
    not from top-to-bottom, unlike other NetPBM formats. Without this
    patch, FFmpeg ignores this exception and decodes/encodes PFM images
    mirrored vertically from their proper orientation.

    For reference, see the NetPBM tool pfmtopam, which encodes a .pam
    from a .pfm, using the correct orientation (and which FFmpeg reads
    correctly). Also compare ffplay to magick display, which shows the
    correct orientation as well.

    See: http://www.pauldebevec.com/Research/HDR/PFM/ and see:
    https://netpbm.sourceforge.net/doc/pfm.html for descriptions of this
    image format.

    Signed-off-by: Leo Izen <leo.izen@gmail.com>
    Reviewed-by: Anton Khirnov <anton@khirnov.net>
    Signed-off-by: James Almer <jamrial@gmail.com>

    • [DH] libavcodec/pnmdec.c
    • [DH] libavcodec/pnmenc.c
    • [DH] tests/ref/lavf/gbrpf32be.pfm
    • [DH] tests/ref/lavf/gbrpf32le.pfm
    • [DH] tests/ref/lavf/grayf32be.pfm
    • [DH] tests/ref/lavf/grayf32le.pfm
  • Programmatically accessing PTS times in MP4 container

    9 November 2022, by mcandril

    Background

    For a research project, we are recording video data from two cameras and feeding a synchronization pulse directly into the microphone ADC every second.

    Problem

    We want to derive a frame timestamp in the clock of the pulse source for each camera frame, so that we can relate the camera images temporally. With our current method (see below), we get a frame offset of around 2 frames between the cameras. Unfortunately, inspection of the video shows that we are clearly 6 frames off (at least at one point) between the cameras.
    I assume that this is because we are relating the audio and video signals incorrectly (see below).

    Approach I think I need help with

    I read that in the MP4 container there should be PTS times for video and audio. How do we access those programmatically? Python would be perfect, but if we have to call ffmpeg via system calls, we can do that too ...
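One way to read per-frame PTS values without any extra Python dependency is to parse ffprobe's CSV output. A minimal sketch, assuming ffprobe is on the PATH; the function names and the `input.mp4` path are placeholders, not part of the original question:

```python
import subprocess

def parse_pts_lines(text):
    """Parse ffprobe CSV output (one pts_time per line) into floats."""
    return [float(line) for line in text.splitlines() if line.strip()]

def frame_pts_times(path, stream="v:0"):
    """Return the PTS time, in seconds, of every frame of one stream.

    Equivalent to running:
      ffprobe -v error -select_streams v:0 \
              -show_entries frame=pts_time -of csv=p=0 input.mp4
    """
    out = subprocess.run(
        ["ffprobe", "-v", "error",
         "-select_streams", stream,
         "-show_entries", "frame=pts_time",
         "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_pts_lines(out)
```

Calling it with `stream="a:0"` gives the audio frames' PTS instead; comparing the two lists shows whether the container actually carries non-uniform timestamps.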

    What we currently fail with

    The original idea was to find video and audio times as

    audio_sample_times = range(N_audiosamples)/audio_sampling_rate
    video_frame_times = range(N_videoframes)/video_frame_rate

    then identify audio_pulse_times on the audio_sample_times base, calculate the relative position of each video_time between the audio_pulse_times around it, and map that relative value onto the corresponding source_pulse_times.
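The mapping described above amounts to piecewise-linear interpolation between pulses. A minimal sketch with NumPy; the pulse times and frame rate below are made-up placeholder data, not values from the actual recordings:

```python
import numpy as np

# Placeholder data: pulse times detected in the audio timebase, the matching
# pulse times in the source clock, and naive per-frame times at 25 fps.
audio_pulse_times = np.array([0.0, 1.001, 2.002, 3.003])
source_pulse_times = np.array([0.0, 1.0, 2.0, 3.0])
video_frame_times = np.arange(0.0, 3.0, 1 / 25)

# Each video time keeps its relative position between the two surrounding
# audio pulses, re-expressed in the source clock.
video_times_in_source_clock = np.interp(
    video_frame_times, audio_pulse_times, source_pulse_times)
```

`np.interp` implements exactly the "same relative position between surrounding pulses" rule, so it stands in for the hand-rolled calculation.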

    However, a first indication that this approach is problematic is that, for some videos, N_audiosamples/audio_sampling_rate differs from N_videoframes/video_frame_rate by multiple frames.

    What I have found so far

    OpenCV's cv2.CAP_PROP_POS_MSEC seems to do exactly what we already do, and does not access any PTS ...

    Edit: What I took from the accepted answer

    import av
    import numpy as np
    from tqdm import tqdm

    container = av.open(video_path)
    signal = []
    audio_sample_times = []
    video_sample_times = []

    for frame in tqdm(container.decode(video=0, audio=0)):
        if isinstance(frame, av.audio.frame.AudioFrame):
            sample_times = (frame.pts + np.arange(frame.samples)) / frame.sample_rate
            audio_sample_times += list(sample_times)
            signal_f_ch0 = frame.to_ndarray().reshape((-1, len(frame.layout.channels))).T[0]
            signal += list(signal_f_ch0)
        elif isinstance(frame, av.video.frame.VideoFrame):
            video_sample_times.append(float(frame.pts * frame.time_base))

    signal = np.abs(np.array(signal))
    audio_sample_times = np.array(audio_sample_times)
    video_sample_times = np.array(video_sample_times)

    Unfortunately, in my particular case, all pts are consecutive and gapless, so the result is the same as with the naive solution ...
    By picture clues, we identified a section of 10 s in the videos somewhere in which they desync, but can't find any traces of that in the data.

  • FFMPEG av_interleaved_write_frame(): Operation not permitted

    21 December 2020, by camslaz

    OK, I am receiving an 'av_interleaved_write_frame(): Operation not permitted' error while trying to encode a MOV file. First I need to outline the conditions behind it.

    I am encoding 12 different files of different resolution sizes and format types via a PHP script that runs on cron. Basically it grabs a 250 MB HD MOV file and encodes it in 4 different frame sizes as MOV, MP4 and WMV file types.

    Now the script takes over 10 minutes to run and encode all the files for the 250 MB input file. I am outputting the processing times, and as soon as the script hits 10 minutes, FFMPEG crashes and returns "av_interleaved_write_frame(): Operation not permitted" for the file currently being encoded and for all remaining files yet to be encoded.

    If the input video is 150 MB, the script runs for under 10 minutes in total, so it encodes all of the videos fine. Additionally, if I run the FFMPEG command individually on the file that fails for the 250 MB input, it encodes the file with no issues.

    From researching the error "av_interleaved_write_frame()", it seems to be related to the timestamps of the input file. That said, it doesn't appear to be the cause in my instance, because I can encode the file with no problem if I do it individually.

    Example ffmpeg command:

    ffmpeg -i GVowbt3vsrXL.mov -s 1920x1080 -sameq -vf "unsharp" -y GVowbt3vsrXL_4.wmv

    Error output for the failed file at 10 minutes. Remember, there is no issue with the command if I run it by itself; it only fails when the script hits 10 minutes.

    'output' =>
     array (
       0 => 'FFmpeg version SVN-r24545, Copyright (c) 2000-2010 the FFmpeg developers',
       1 => '  built on Aug 20 2010 23:32:02 with gcc 4.1.2 20080704 (Red Hat 4.1.2-48)',
       2 => '  configuration: --enable-shared --enable-gpl --enable-pthreads --enable-nonfree --cpu=opteron --extra-cflags=\'-O3 -march=opteron -mtune=opteron\' --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-avfilter --enable-filter=movie --enable-avfilter-lavf --enable-swscale',
       3 => '  libavutil     50.23. 0 / 50.23. 0',
       4 => '  libavcore      0. 1. 0 /  0. 1. 0',
       5 => '  libavcodec    52.84. 1 / 52.84. 1',
       6 => '  libavformat   52.77. 0 / 52.77. 0',
       7 => '  libavdevice   52. 2. 0 / 52. 2. 0',
       8 => '  libavfilter    1.26. 1 /  1.26. 1',
       9 => '  libswscale     0.11. 0 /  0.11. 0',
       10 => 'Input #0, mov,mp4,m4a,3gp,3g2,mj2, from \'/home/hdfootage/public_html/process/VideoEncode/_tmpfiles/GVowbt3vsrXL/GVowbt3vsrXL.mov\':',
       11 => '  Metadata:',
       12 => '    major_brand     : qt',
       13 => '    minor_version   : 537199360',
       14 => '    compatible_brands: qt',
       15 => '  Duration: 00:00:20.00, start: 0.000000, bitrate: 110802 kb/s',
       16 => '    Stream #0.0(eng): Video: mjpeg, yuvj422p, 1920x1080 [PAR 72:72 DAR 16:9], 109386 kb/s, 25 fps, 25 tbr, 25 tbn, 25 tbc',
       17 => '    Stream #0.1(eng): Audio: pcm_s16be, 44100 Hz, 2 channels, s16, 1411 kb/s',
       18 => '[buffer @ 0xdcd0e0] w:1920 h:1080 pixfmt:yuvj422p',
       19 => '[unsharp @ 0xe00280] auto-inserting filter \'auto-inserted scaler 0\' between the filter \'src\' and the filter \'Filter 0 unsharp\'',
       20 => '[scale @ 0xe005b0] w:1920 h:1080 fmt:yuvj422p -> w:1920 h:1080 fmt:yuv420p flags:0xa0000004',
       21 => '[unsharp @ 0xe00280] effect:sharpen type:luma msize_x:5 msize_y:5 amount:1.00',
       22 => '[unsharp @ 0xe00280] effect:none type:chroma msize_x:0 msize_y:0 amount:0.00',
       23 => 'Output #0, asf, to \'/home/hdfootage/public_html/process/VideoEncode/_tmpfiles/GVowbt3vsrXL/GVowbt3vsrXL_4.wmv\':',
       24 => '  Metadata:',
       25 => '    WM/EncodingSettings: Lavf52.77.0',
       26 => '    Stream #0.0(eng): Video: msmpeg4, yuv420p, 1920x1080 [PAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 1k tbn, 25 tbc',
       27 => '    Stream #0.1(eng): Audio: libmp3lame, 44100 Hz, 2 channels, s16, 64 kb/s',
       28 => 'Stream mapping:',
       29 => '  Stream #0.0 -> #0.0',
       30 => '  Stream #0.1 -> #0.1',
       31 => 'Press [q] to stop encoding',
       32 => '[msmpeg4 @ 0xdccb50] warning, clipping 1 dct coefficients to -127..127',

    Then it errors:

    frame=   75 fps=  5 q=1.0 size=   12704kB time=2.90 bitrate=35886.0kbits/s av_interleaved_write_frame(): Operation not permitted',
     )

    Has anybody encountered this sort of problem before? It seems to be something to do with the timestamps, but only because the script runs for longer than 10 minutes. It may be related to PHP/Apache config, but I don't know whether it is FFMPEG or the server config I need to be looking at.
