
Other articles (55)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. To help us fix it, please provide the following information: the browser you are using, including its exact version; as precise an explanation of the problem as possible; if possible, the steps that lead to the problem; and a link to the site / page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If in doubt, contact the administrator of your MédiaSpip to find out.

On other sites (13622)

  • How can this YUV420 video have uneven dimensions

    22 December 2020, by Xelpi

    Just trying to cement my understanding of video codecs; I can't seem to find any information about this.

    https://www.mediafire.com/file/cbx8sciq5mie94m/arrow.webm/file contains a tiny VP9 WebM. I generated it via ffmpeg by doing a simple gif -> webm conversion.

    The source gif had a 13x11 resolution. Somehow, the output video also has a 13x11 resolution. I'm trying to understand how that is possible.

    


    As far as I understand:

    1. The YUV420 pixel format should make this impossible: the chroma subsampling factor of 2 forces a divisibility-by-two requirement on both dimensions.

    2. VP9 itself has a minimum block size of 16x16(?), so at least that much data must be encoded(?)
    


    Consequently, my assumption is that either a 14x12 or a 16x16 video stream is encoded here, and it is somehow being scaled or cropped down to 13x11.
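For what it's worth, the chroma arithmetic alone doesn't force even dimensions: FFmpeg sizes yuv420p chroma planes with a ceiling division, and a codec can encode a padded frame internally while signalling the true display size. A minimal sketch of both calculations (the padding multiple of 8 is my assumption about VP9's internals, not something taken from the ffprobe output):

```javascript
// Sketch: how odd yuv420p dimensions can still be represented.
// The chroma-plane sizing matches FFmpeg's ceiling-division behaviour
// for yuv420p; the padding multiple (8) is an assumed VP9 internal.

function chromaPlaneSize(width, height) {
  // 4:2:0 chroma is subsampled by 2 with a *ceiling* division, so
  // 13x11 luma gives a 7x6 chroma plane: no even-size requirement.
  return { w: Math.ceil(width / 2), h: Math.ceil(height / 2) };
}

function paddedCodedSize(width, height, block = 8) {
  // Hypothetical internal padding: round each dimension up to the
  // codec's block grid. The bitstream can still signal the 13x11
  // display size, and the decoder crops the padding away.
  const up = (x) => Math.ceil(x / block) * block;
  return { w: up(width), h: up(height) };
}

console.log(chromaPlaneSize(13, 11)); // { w: 7, h: 6 }
console.log(paddedCodedSize(13, 11)); // { w: 16, h: 16 }
```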

    


    The problem is I can't find any explanation of how this works.

    Here's the ffprobe output for the stream:

    


    [STREAM]
index=0
codec_name=vp9
codec_long_name=Google VP9
profile=Profile 0
codec_type=video
codec_time_base=1/60
codec_tag_string=[0][0][0][0]
codec_tag=0x0000
width=13
height=11
coded_width=13
coded_height=11
closed_captions=0
has_b_frames=0
sample_aspect_ratio=1:1
display_aspect_ratio=13:11
pix_fmt=yuv420p
level=-99
color_range=tv
color_space=unknown
color_transfer=unknown
color_primaries=unknown
chroma_location=unspecified
field_order=unknown
timecode=N/A
refs=1
id=N/A
r_frame_rate=60/1
avg_frame_rate=60/1
time_base=1/1000
start_pts=0
start_time=0.000000
duration_ts=N/A
duration=N/A
bit_rate=N/A
max_bit_rate=N/A
bits_per_raw_sample=N/A
nb_frames=N/A
nb_read_frames=N/A
nb_read_packets=N/A
DISPOSITION:default=1
DISPOSITION:dub=0
DISPOSITION:original=0
DISPOSITION:comment=0
DISPOSITION:lyrics=0
DISPOSITION:karaoke=0
DISPOSITION:forced=0
DISPOSITION:hearing_impaired=0
DISPOSITION:visual_impaired=0
DISPOSITION:clean_effects=0
DISPOSITION:attached_pic=0
DISPOSITION:timed_thumbnails=0
TAG:alpha_mode=1
TAG:ENCODER=Lavc58.91.100 libvpx-vp9
TAG:DURATION=00:00:00.600000000
[/STREAM]


    


    and for the last frame:

    [FRAME]
media_type=video
stream_index=0
key_frame=0
pkt_pts=583
pkt_pts_time=0.583000
pkt_dts=583
pkt_dts_time=0.583000
best_effort_timestamp=583
best_effort_timestamp_time=0.583000
pkt_duration=16
pkt_duration_time=0.016000
pkt_pos=3639
pkt_size=15
width=13
height=11
pix_fmt=yuv420p
sample_aspect_ratio=1:1
pict_type=P
coded_picture_number=0
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
color_range=tv
color_space=unknown
color_primaries=unknown
color_transfer=unknown
chroma_location=unspecified
[/FRAME]


    


    Going off the coded_width and coded_height values (supposedly the "true" width/height before any scaling(?)) plus the SAR of 1, as far as I can tell this is genuinely a 13x11 video stream. But that should be impossible, no?

    


    My question is: why is this a valid video file?

    


    If I try to, e.g., zscale something to an odd resolution in the YUV420 pixel format, I hit the expected chroma subsampling errors.

  • avformat/mvi: Use 64bit for testing dimensions

    16 January 2021, by Michael Niedermayer
    avformat/mvi: Use 64bit for testing dimensions

    Fixes: signed integer overflow: 65535 * 65535 cannot be represented in type 'int'
    Fixes: 26910/clusterfuzz-testcase-minimized-ffmpeg_dem_MVI_fuzzer-6649291124899840

    Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
    Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

    • [DH] libavformat/mvi.c
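The arithmetic behind this fix is easy to reproduce. A sketch (not the actual mvi.c code): JavaScript's Math.imul emulates C's wrapping 32-bit signed multiplication, and BigInt stands in for the 64-bit type the dimension test is widened to:

```javascript
// Sketch: why 65535 * 65535 cannot be represented in a 32-bit 'int'.
// Math.imul reproduces C's wrapping 32-bit signed multiply.
const asInt32 = Math.imul(65535, 65535);
console.log(asInt32); // -131071, the overflowed 32-bit result

// Widening to 64 bits first (BigInt here, int64_t in C) keeps the
// true product, so a dimension sanity check can be done safely.
const asInt64 = 65535n * 65535n;
console.log(asInt64); // 4294836225n

const INT32_MAX = 2147483647n;
console.log(asInt64 > INT32_MAX); // true: too large for 'int'
```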
  • I'm attempting to retrieve the dimensions of in-memory video buffer objects in Node.js without writing to disk

    23 March 2021, by undefined

    All right, so I have a Node.js server where users can upload media files. It's important for displaying the media in the client later on that I retrieve and store the video's width and height in advance.

    For performance reasons, disk-space limitations, and a few other constraints, I'm attempting to do so without saving the video buffer object (retrieved via Multer) to disk, as disk writes have terrible performance on the server I'm using.

    I have FFmpeg and ffprobe, as well as the npm get-video-dimensions module, but I can't find a way to get media statistics without writing the file. For example, get-video-dimensions only accepts a file path.

    Is there a way to feed the buffer into one of these utilities using a stream/pipe to simulate a source coming from disk, or is there an npm module I've overlooked that could achieve this?

    if (imageBufferObject.media_type == "video") {
      // Get resolution

      // Save to disk
      let write_response = await writeFile(imageBufferObject)
      // Use utility
      let dim = await dimensions(path.join(__dirname, 'tmp', imageBufferObject.newName))
      // Delete file
      let delete_response = await deleteFile(imageBufferObject)
    }

    async function writeFile(file) {
      return new Promise((resolve, reject) => {
        fs.writeFile(path.join(__dirname, 'tmp', file.newName), file.buffer, (err) => {
          if (err) return reject(err)
          resolve(200)
        })
      })
    }

    async function deleteFile(file) {
      return new Promise((resolve, reject) => {
        fs.unlink(path.join(__dirname, 'tmp', file.newName), (err) => {
          if (err) return reject(err)
          resolve(200)
        })
      })
    }

    I desperately want to avoid using the hard disk!
