
Other articles (20)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
APPENDIX: The plugins used specifically for the farm
5 March 2010
The central/master site of the farm needs several additional plugins, beyond those used by the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance as soon as users sign up; the verifier plugin, which provides a field-verification API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (3800)
-
How can this YUV420 video have uneven dimensions
22 December 2020, by Xelpi
Just trying to cement my understanding of video codecs, and I can't seem to find any information about this.


https://www.mediafire.com/file/cbx8sciq5mie94m/arrow.webm/file contains a tiny VP9 webm. This was generated by me via ffmpeg by doing a simple gif -> webm conversion.

The source gif had a 13x11 resolution. Somehow, the output video also has a 13x11 resolution. I'm trying to understand how that is possible.
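
(Essentially something like the command below; arrow.gif is my guess at the input name, and libvpx-vp9 matches the ENCODER tag in the probe further down:)

ffmpeg -i arrow.gif -c:v libvpx-vp9 arrow.webm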

As far as I understand:

- The YUV420 pixel format would make this impossible, due to the chroma subsampling factor of 2 forcing a divisible-by-two requirement.
- VP9 itself has a minimum block size of 16x16(?), so at least that much data must be encoded(?).

Consequently, it's my assumption that we have either a 14x12 or 16x16 video stream encoded here that is somehow being scaled or cropped down to 13x11.
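
If that is what happens, the bookkeeping would look something like the sketch below; this is purely my own assumption about ceil-rounded chroma planes and superblock padding, not something I pulled from a codec spec:

// Rough bookkeeping for a YUV 4:2:0 frame with odd dimensions (my assumption).
// Chroma planes round *up*, so a 13x11 luma plane still works:
// each chroma plane is 7x6 rather than an impossible 6.5x5.5.
function yuv420PlaneSizes(width, height) {
  return {
    luma: { w: width, h: height },                                  // 13 x 11
    chroma: { w: Math.ceil(width / 2), h: Math.ceil(height / 2) },  // 7 x 6
  };
}

// VP9 codes frames in fixed-size superblocks and crops on output,
// so a 13x11 frame would be coded as one padded 64x64 superblock
// but displayed at 13x11 (64 here is the VP9 superblock size).
function codedExtent(width, height, block = 64) {
  const align = (n) => Math.ceil(n / block) * block;
  return { w: align(width), h: align(height) }; // 64 x 64
}

console.log(yuv420PlaneSizes(13, 11), codedExtent(13, 11));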


The problem is I can't find any explanation as to how this is working.


Here's the ffprobe output for the stream:


[STREAM]
index=0
codec_name=vp9
codec_long_name=Google VP9
profile=Profile 0
codec_type=video
codec_time_base=1/60
codec_tag_string=[0][0][0][0]
codec_tag=0x0000
width=13
height=11
coded_width=13
coded_height=11
closed_captions=0
has_b_frames=0
sample_aspect_ratio=1:1
display_aspect_ratio=13:11
pix_fmt=yuv420p
level=-99
color_range=tv
color_space=unknown
color_transfer=unknown
color_primaries=unknown
chroma_location=unspecified
field_order=unknown
timecode=N/A
refs=1
id=N/A
r_frame_rate=60/1
avg_frame_rate=60/1
time_base=1/1000
start_pts=0
start_time=0.000000
duration_ts=N/A
duration=N/A
bit_rate=N/A
max_bit_rate=N/A
bits_per_raw_sample=N/A
nb_frames=N/A
nb_read_frames=N/A
nb_read_packets=N/A
DISPOSITION:default=1
DISPOSITION:dub=0
DISPOSITION:original=0
DISPOSITION:comment=0
DISPOSITION:lyrics=0
DISPOSITION:karaoke=0
DISPOSITION:forced=0
DISPOSITION:hearing_impaired=0
DISPOSITION:visual_impaired=0
DISPOSITION:clean_effects=0
DISPOSITION:attached_pic=0
DISPOSITION:timed_thumbnails=0
TAG:alpha_mode=1
TAG:ENCODER=Lavc58.91.100 libvpx-vp9
TAG:DURATION=00:00:00.600000000
[/STREAM]



and for the last frame:


[FRAME]
media_type=video
stream_index=0
key_frame=0
pkt_pts=583
pkt_pts_time=0.583000
pkt_dts=583
pkt_dts_time=0.583000
best_effort_timestamp=583
best_effort_timestamp_time=0.583000
pkt_duration=16
pkt_duration_time=0.016000
pkt_pos=3639
pkt_size=15
width=13
height=11
pix_fmt=yuv420p
sample_aspect_ratio=1:1
pict_type=P
coded_picture_number=0
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
color_range=tv
color_space=unknown
color_primaries=unknown
color_transfer=unknown
chroma_location=unspecified
[/FRAME]



Going off the coded_width and coded_height values (which are supposed to represent the "true" width/height before any scaling(?)), plus a sar value of 1, as far as I can tell this is genuinely a 13x11 video stream, but that should be impossible, no?
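
(Those fields can be pulled on their own with a standard ffprobe query, e.g.:)

ffprobe -v error -select_streams v:0 \
  -show_entries stream=width,height,coded_width,coded_height,sample_aspect_ratio \
  -of default=noprint_wrappers=1 arrow.webm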

My question is: why is this a valid video file?


If I try to e.g. zscale something to an uneven resolution in the YUV420 pixel format, I hit the expected chroma subsampling errors.
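
(Aside: when I do need even dimensions, the common workaround is to pad up to the next even size before encoding; the pad expression below is the usual idiom, with placeholder filenames:)

ffmpeg -i input.gif -vf "pad=ceil(iw/2)*2:ceil(ih/2)*2" -c:v libvpx-vp9 output.webm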

-
FFmpeg and QuickTime Player dimensions of video do not match [duplicate]
31 May 2020, by GRS
I have a video.mp4, which I scaled using FFmpeg to get out.mp4. The new video has the following probe:


{'streams': [{'index': 0,
 'codec_name': 'h264',
 'codec_long_name': 'H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10',
 'profile': 'High',
 'codec_type': 'video',
 'codec_time_base': '1/50',
 'codec_tag_string': 'avc1',
 'codec_tag': '0x31637661',
 'width': 886,
 'height': 1920,
 'coded_width': 896,
 'coded_height': 1920,
 'has_b_frames': 2,
 'sample_aspect_ratio': '5120:1329',
 'display_aspect_ratio': '16:9',
 'pix_fmt': 'yuv420p',
 'level': 40,
 'chroma_location': 'left',
 'refs': 1,
 'is_avc': 'true',
 'nal_length_size': '4',
 'r_frame_rate': '25/1',
 'avg_frame_rate': '25/1',
 'time_base': '1/12800',
 'start_pts': 0,
 'start_time': '0.000000',
 'duration_ts': 384512,
 'duration': '30.040000',
 'bit_rate': '490832',
 'bits_per_raw_sample': '8',
 'nb_frames': '751',
 'disposition': {'default': 1,
 'dub': 0,
 'original': 0,
 'comment': 0,
 'lyrics': 0,
 'karaoke': 0,
 'forced': 0,
 'hearing_impaired': 0,
 'visual_impaired': 0,
 'clean_effects': 0,
 'attached_pic': 0,
 'timed_thumbnails': 0},
 'tags': {'language': 'und', 'handler_name': 'VideoHandler'}}],
 'format': {'filename': 'out.mp4',
 'nb_streams': 1,
 'nb_programs': 0,
 'format_name': 'mov,mp4,m4a,3gp,3g2,mj2',
 'format_long_name': 'QuickTime / MOV',
 'start_time': '0.000000',
 'duration': '30.040000',
 'size': '1852948',
 'bit_rate': '493461',
 'probe_score': 100,
 'tags': {'major_brand': 'isom',
 'minor_version': '512',
 'compatible_brands': 'isomiso2avc1mp41',
 'encoder': 'Lavf58.20.100'}}}




I am expecting my video player to say the video is 886 x 1920; however, it shows 3413:1920.




What could be the error? I am using pix_fmt=yuva420p and codec=libx264 to create the video.


The error is that my aspect ratio remains 16:9, the ratio of the original input video, when it should change to roughly 9:19; e.g. 3413 = 1920 * 16 / 9. So why isn't the aspect ratio changed when the video is scaled?
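
Reading the probe again, the scale filter appears to have preserved the old display aspect ratio by writing sample_aspect_ratio=5120:1329, and the player applies that to the storage width: 886 * 5120 / 1329 ≈ 3413, which is exactly the 3413:1920 being shown. If that is the cause, forcing the SAR back to square after scaling should fix the display size; setsar=1 is the standard filter for this, and the command below is only a sketch reusing the dimensions from the probe:

ffmpeg -i video.mp4 -vf "scale=886:1920,setsar=1" -c:v libx264 out.mp4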

-
FFMPEG crop a portrait image square with respect to the zoomed dimensions and x/y pan
3 October 2022, by hugger
I am making a photo crop component for my mobile app.


If an image selected from the image picker is above a 4:5 ratio, I need to use FFmpeg to crop it to 1:1, using the dynamic X/Y pan values along with the scale value from pinching (1.00 to 10).


I am trying to use these values with the FFmpeg crop/scale filters, but no matter what I try I can't seem to get the crop to work as expected... it does not match up with the UI pan/zoom.


My FFmpeg command looks like this, along with some other relevant variable code:


let zoom = this._scale.__getValue(); // set dynamically by the user; ranges from 0.1 to 10
let translateX = this._translateX.__getValue(); // pan amount from the left side
let translateY = this._translateY.__getValue(); // pan amount from the top

// First crop the width and height divided by the zoom for the zoom scaling(?),
// then use translateX/translateY as the crop coordinates (not matching up).
// I also tried translate * zoom as the scaling factor, but that did not work either...
// The image needs to be square, so a scale filter is chained after the original crop(?).
FFmpegKit.execute(
  `-y -i ${this.state.mediaSource} -vf "crop=iw/${zoom}:ih/${zoom}:${translateX}:${translateY}, scale=iw:iw" -qscale 0 -frames:v 1 ${filterPathPostCrop}`
).then(async (session) => {
  const returnCode = await session.getReturnCode();
  if (ReturnCode.isSuccess(returnCode)) {
    this.setState({
      mediaSource: filterPathPostCrop,
      videoSourcePreview: `${filterPathPostCrop}?${new Date().getTime()}`,
      ffMPEGinProgress: null,
      aspectRatio: 1080 / 1080,
      videoTime: 0,
      isPlayingVideo: false,
      isCropping: false,
      filterOutputIsAlt: !this.state.filterOutputIsAlt,
      wasCropped: true,
    });
  } else if (ReturnCode.isCancel(returnCode)) {
    // cancelled by the user
  } else {
    // conversion failed
    alert('error');
  }
});



I appreciate any guidance I can get with this. I feel like I am close, I just can't seem to get this calculation working...
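
For what it's worth, this is the mapping I think has to happen, written out as a sketch; previewWidth and the centre-based pan convention are assumptions about my own UI, not anything FFmpeg defines:

// Hypothetical mapping from UI pan/zoom to crop filter arguments.
// Assumes translateX/translateY are screen-pixel offsets of the image
// centre within the preview, and that the preview shows the full image
// width at zoom = 1 (previewWidth is the preview's on-screen width).
function cropParams(iw, ih, zoom, translateX, translateY, previewWidth) {
  const clamp = (v, lo, hi) => Math.min(Math.max(v, lo), hi);
  const cropW = iw / zoom; // visible source width shrinks as zoom grows
  const cropH = ih / zoom;
  // Source pixels represented by one screen pixel at this zoom level.
  const srcPerScreen = iw / (previewWidth * zoom);
  // Panning the image right moves the crop window left in source space.
  const x = (iw - cropW) / 2 - translateX * srcPerScreen;
  const y = (ih - cropH) / 2 - translateY * srcPerScreen;
  return {
    w: Math.round(cropW),
    h: Math.round(cropH),
    x: Math.round(clamp(x, 0, iw - cropW)),
    y: Math.round(clamp(y, 0, ih - cropH)),
  };
}

// e.g. const { w, h, x, y } = cropParams(3000, 4000, 2, 40, -25, 390);
// ...which would feed `crop=${w}:${h}:${x}:${y}` in the -vf string above.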