Advanced search

Media (39)

Keyword: - Tags - /audio

Other articles (23)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)
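
    As a quick way to look at that table on a typical MySQL-backed SPIP install (a sketch only; the database name and user below are placeholders), something like the following lists the fields described above:

    # List the columns of the SPIPmotion queue table mentioned in the article.
    mysql -u spip_user -p spip_db -e "DESCRIBE spip_spipmotion_attentes;"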

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that led to the problem; and a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; and translations of existing documentation into other languages.
    To contribute, register to the project users’ mailing (...)

On other sites (4711)

  • How to create an MPEG-DASH mpd file with multiple periods?

    14 September 2022, by Nimish Agrawal

    I wanted to join multiple mp4 files to create an mpd file, with each of the individual mp4 files in a different period.

    I tried to do that using ffmpeg:
    ffmpeg -i INPUT_FILE1.mp4 INPUT_FILE2.mp4 ...options... -f dash output.mpd

    But it only considers the first input file. Is it possible to do this using ffmpeg or any other tool?

    ffmpeg -y -re -i big_buck_bunny_720p_1mb.mp4  -c:v libx264 -x264opts "keyint=24:min-keyint=24:no-scenecut" -r 24  -c:a aac -b:a 128k  -bf 1 -b_strategy 0 -sc_threshold 0 -pix_fmt yuv420p  -map 0:v:0 -map 0:a:0 -map 0:v:0 -map 0:a:0 -map 0:v:0 -map 0:a:0  -b:v:0 250k  -filter:v:0 "scale=-2:240" -profile:v:0 baseline  -b:v:1 750k  -filter:v:1 "scale=-2:480" -profile:v:1 main  -b:v:2 1500k -filter:v:2 "scale=-2:720" -profile:v:2 high  -use_timeline 1 -use_template 1 -window_size 5 -adaptation_sets "id=0,streams=v id=1,streams=a"  -f dash movie-dash\movie.mpd
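
    As far as I can tell, ffmpeg's dash muxer always writes a single Period, so passing several inputs (or concatenating them first) still produces a one-period MPD. A workaround sketch, assuming INPUT_FILE1.mp4 and INPUT_FILE2.mp4 stand in for the real files: package each input into its own MPD, then merge the resulting <Period> elements into one manifest by hand or with a small script.

    # Package each input separately; each run yields its own single-period MPD.
    mkdir -p dash1 dash2
    ffmpeg -i INPUT_FILE1.mp4 -c:v libx264 -c:a aac \
        -use_timeline 1 -use_template 1 -f dash dash1/period1.mpd
    ffmpeg -i INPUT_FILE2.mp4 -c:v libx264 -c:a aac \
        -use_timeline 1 -use_template 1 -f dash dash2/period2.mpd
    # Copy the <Period> elements of dash1/period1.mpd and dash2/period2.mpd into a
    # single MPD, adjusting each Period's start attribute, since ffmpeg itself does
    # not offer a multi-period option.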


    


  • Convert a 2-channel mp4 to individual mono wav files using FFmpeg or Python code

    30 May 2024, by Harish Alwala

    I am new to audio files and their codecs.

    I would like to convert a 2-channel mp4 file into individual mono wav files.

    My understanding is that when I say 2 channels, the file stores the speech coming from each microphone in a separate channel, and when I split the channels into individual mono wav files, I get the speech of each microphone.

    My intention here is to get the speech from each channel and convert it to text. This way I can set the name of the speaker based on the channel.

    I tried with ffmpeg and with Python code as well; unfortunately, I get two files with the same content.

    Looking at the following details, can someone construct an ffmpeg command or a Python script to convert the 2-channel mp4 file into 2 individual mono wav files?

    FFprobe
ffprobe -i Two-Channel.mp4 -show_streams -select_streams a

    


    Result

    


    Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: isommp42
    encoder         : Google
  Duration: 00:52:42.19, start: 0.000000, bitrate: 421 kb/s
  Stream #0:0[0x1](und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 640x360 [SAR 1:1 DAR 16:9], 322 kb/s, 25 fps, 25 tbr, 12800 tbn (default)
      Metadata:
        handler_name    : ISO Media file produced by Google Inc.
        vendor_id       : [0][0][0][0]
  Stream #0:1[0x2](eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 96 kb/s (default)
      Metadata:
        handler_name    : ISO Media file produced by Google Inc.
        vendor_id       : [0][0][0][0]
[STREAM]
index=1
codec_name=aac
codec_long_name=AAC (Advanced Audio Coding)
profile=LC
codec_type=audio
codec_tag_string=mp4a
codec_tag=0x6134706d
sample_fmt=fltp
sample_rate=44100
channels=2
channel_layout=stereo
bits_per_sample=0
initial_padding=0
id=0x2
r_frame_rate=0/0
avg_frame_rate=0/0
time_base=1/44100
start_pts=0
start_time=0.000000
duration_ts=139452416
duration=3162.186304
bit_rate=96000
max_bit_rate=N/A
bits_per_raw_sample=N/A
nb_frames=136184
nb_read_frames=N/A
nb_read_packets=N/A
extradata_size=16
DISPOSITION:default=1
DISPOSITION:dub=0
DISPOSITION:original=0
DISPOSITION:comment=0
DISPOSITION:lyrics=0
DISPOSITION:karaoke=0
DISPOSITION:forced=0
DISPOSITION:hearing_impaired=0
DISPOSITION:visual_impaired=0
DISPOSITION:clean_effects=0
DISPOSITION:attached_pic=0
DISPOSITION:timed_thumbnails=0
DISPOSITION:non_diegetic=0
DISPOSITION:captions=0
DISPOSITION:descriptions=0
DISPOSITION:metadata=0
DISPOSITION:dependent=0
DISPOSITION:still_image=0
TAG:language=eng
TAG:handler_name=ISO Media file produced by Google Inc.
TAG:vendor_id=[0][0][0][0]
[/STREAM] 


    


    FFmpeg command

    


    ffmpeg -i Two-Channel.mp4 -filter_complex "pan=mono|c0=0c0" left_channel.wav

    


    Python code
Using FFmpeg, I converted the mp4 to wav and then tried the Python code shown in the screenshots (not reproduced here).
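
    For what it's worth, the usual way to get two mono wav files out of a stereo track with ffmpeg is the channelsplit filter (in the pan filter, the channel spec would normally be written c0=c0, or with an explicit gain such as c0=1*c0). A minimal sketch, reusing the Two-Channel.mp4 name from above; left_channel.wav and right_channel.wav are just output names:

    # Split the stereo stream into two mono outputs in a single pass.
    ffmpeg -i Two-Channel.mp4 \
        -filter_complex "[0:a]channelsplit=channel_layout=stereo[left][right]" \
        -map "[left]" left_channel.wav \
        -map "[right]" right_channel.wav

    Each wav then contains only one of the original channels, which can be sent to speech-to-text separately.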

    


  • Get PTS from raw H264 mdat generated by iOS AVAssetWriter

    26 December 2012, by kolyuchiy

    I'm trying to simultaneously read and write an H.264 mov file written by AVAssetWriter. I managed to extract individual NAL units, pack them into ffmpeg's AVPackets and write them into another video format using ffmpeg. It works, and the resulting file plays well, except the playback speed is not right. How do I calculate the correct PTS/DTS values from raw H.264 data? Or maybe there exists some other way to get them?

    Here's what I've tried:

    1. Limit the capture min/max frame rate to 30 and assume that the output file will be 30 fps. In fact, its fps is always less than the values that I set, and I also think the fps is not constant from packet to packet.

    2. Remember each written sample's presentation timestamp, assume that samples map one-to-one to NALUs, and apply the saved timestamps to the output packets. This doesn't work.

    3. Set PTS to 0 or AV_NOPTS_VALUE. This doesn't work either.

    From googling about it I understand that raw H.264 data usually doesn't contain any timing info. It can sometimes have some timing info inside SEI, but the files that I use don't have it. On the other hand, there are some applications that do exactly what I'm trying to do, so I suppose it is possible somehow.
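
    As a quick sanity check of the constant-frame-rate assumption (a sketch only; 30 fps is just the value mentioned above, and the file names are placeholders), one can let ffmpeg synthesise timestamps while remuxing the raw elementary stream, since a bare Annex-B H.264 stream carries no timing information of its own:

    # Tell ffmpeg the assumed rate and let it generate PTS/DTS while stream-copying.
    ffmpeg -fflags +genpts -framerate 30 -i extracted_stream.h264 -c copy check.mp4

    If the remuxed file also plays at the wrong speed, the real capture rate differs from the assumed one, which matches the variable fps observed in attempt 1.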