
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (65)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If necessary, contact your MediaSPIP administrator to find out. -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used as a fallback.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...) -
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (9148)
-
ffmpeg command to convert 720p to 1080p, 1440p, 2160p compatible with YouTube [closed]
24 June 2024, by Tendekai Muchenje
I have a video that I created in Final Cut Pro. When I uploaded it to YouTube, it only maxes out at 720p. I would like it to have higher options like 1080p, 1440p and 2160p, even if the quality does not change. Maybe 2160p is impossible, but 1440p would be nice. I have tried this command, but the resulting file is unplayable; it only plays audio.


ffmpeg -i Hey\ Girl.mov -vf scale=3840x2160:flags=lanczos -c:v libx264 -crf 13 -c:a aac -b:a 512k -preset slow hey_girl_hd.mov



I am looking for a command that would make at least 1080p and 1440p work on YouTube. If 2160p can work, that would be great too. If it matters, here is all the info that ffprobe reports about the file:


{
    "streams": [
        {
            "index": 0,
            "codec_name": "pcm_s24le",
            "codec_long_name": "PCM signed 24-bit little-endian",
            "codec_type": "audio",
            "codec_tag_string": "lpcm",
            "codec_tag": "0x6d63706c",
            "sample_fmt": "s32",
            "sample_rate": "48000",
            "channels": 2,
            "bits_per_sample": 24,
            "r_frame_rate": "0/0",
            "avg_frame_rate": "0/0",
            "time_base": "1/48000",
            "start_pts": 0,
            "start_time": "0.000000",
            "duration_ts": 8987200,
            "duration": "187.233333",
            "bit_rate": "2304000",
            "bits_per_raw_sample": "24",
            "nb_frames": "8987200",
            "disposition": {
                "default": 1,
                "dub": 0,
                "original": 0,
                "comment": 0,
                "lyrics": 0,
                "karaoke": 0,
                "forced": 0,
                "hearing_impaired": 0,
                "visual_impaired": 0,
                "clean_effects": 0,
                "attached_pic": 0,
                "timed_thumbnails": 0
            },
            "tags": {
                "creation_time": "2024-04-15T23:26:08.000000Z",
                "language": "und",
                "handler_name": "Core Media Audio",
                "vendor_id": "[0][0][0][0]"
            }
        },
        {
            "index": 1,
            "codec_name": "prores",
            "codec_long_name": "Apple ProRes (iCodec Pro)",
            "profile": "Standard",
            "codec_type": "video",
            "codec_tag_string": "apcn",
            "codec_tag": "0x6e637061",
            "width": 1280,
            "height": 720,
            "coded_width": 1280,
            "coded_height": 720,
            "closed_captions": 0,
            "has_b_frames": 0,
            "sample_aspect_ratio": "1:1",
            "display_aspect_ratio": "16:9",
            "pix_fmt": "yuv422p10le",
            "level": -99,
            "color_range": "tv",
            "color_space": "bt709",
            "color_transfer": "bt709",
            "color_primaries": "bt709",
            "field_order": "progressive",
            "refs": 1,
            "r_frame_rate": "60/1",
            "avg_frame_rate": "60/1",
            "time_base": "1/6000",
            "start_pts": 0,
            "start_time": "0.000000",
            "duration_ts": 1123400,
            "duration": "187.233333",
            "bit_rate": "138655195",
            "bits_per_raw_sample": "10",
            "nb_frames": "11234",
            "disposition": {
                "default": 1,
                "dub": 0,
                "original": 0,
                "comment": 0,
                "lyrics": 0,
                "karaoke": 0,
                "forced": 0,
                "hearing_impaired": 0,
                "visual_impaired": 0,
                "clean_effects": 0,
                "attached_pic": 0,
                "timed_thumbnails": 0
            },
            "tags": {
                "creation_time": "2024-04-15T23:26:08.000000Z",
                "language": "und",
                "handler_name": "Core Media Video",
                "vendor_id": "[0][0][0][0]",
                "encoder": "Apple ProRes 422",
                "timecode": "00:00:00:00"
            }
        },
        {
            "index": 2,
            "codec_type": "data",
            "codec_tag_string": "tmcd",
            "codec_tag": "0x64636d74",
            "r_frame_rate": "0/0",
            "avg_frame_rate": "6000/100",
            "time_base": "1/6000",
            "start_pts": 0,
            "start_time": "0.000000",
            "duration_ts": 1123400,
            "duration": "187.233333",
            "nb_frames": "1",
            "disposition": {
                "default": 1,
                "dub": 0,
                "original": 0,
                "comment": 0,
                "lyrics": 0,
                "karaoke": 0,
                "forced": 0,
                "hearing_impaired": 0,
                "visual_impaired": 0,
                "clean_effects": 0,
                "attached_pic": 0,
                "timed_thumbnails": 0
            },
            "tags": {
                "creation_time": "2024-04-15T23:26:08.000000Z",
                "language": "und",
                "handler_name": "Core Media Time Code",
                "timecode": "00:00:00:00"
            }
        }
    ],
    "format": {
        "filename": "Hey Girl.mov",
        "nb_streams": 3,
        "nb_programs": 0,
        "format_name": "mov,mp4,m4a,3gp,3g2,mj2",
        "format_long_name": "QuickTime / MOV",
        "start_time": "0.000000",
        "duration": "187.233333",
        "size": "3306875809",
        "bit_rate": "141294320",
        "probe_score": 100,
        "tags": {
            "major_brand": "qt ",
            "minor_version": "0",
            "compatible_brands": "qt ",
            "creation_time": "2024-04-15T23:26:08.000000Z",
            "com.apple.quicktime.keywords": "Hey GIRL",
            "com.apple.quicktime.description": "This video is about Hey Girl",
            "com.apple.quicktime.author": "Ja Mo",
            "com.apple.quicktime.displayname": "Hey Girl",
            "com.apple.quicktime.title": "Hey Girl"
        }
    }
}
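
The probe above shows a 10-bit 4:2:2 ProRes source (pix_fmt yuv422p10le). One plausible explanation for the audio-only playback, not confirmed in the post, is that libx264 inherits that pixel format and emits a High 4:2:2 10-bit stream that many players refuse to decode. A hedged sketch of an upscale command that forces 8-bit 4:2:0 output (the output name and CRF value are illustrative, not taken from the post):

ffmpeg -i Hey\ Girl.mov -vf "scale=3840:2160:flags=lanczos" \
  -c:v libx264 -preset slow -crf 18 -pix_fmt yuv420p \
  -c:a aac -b:a 512k -movflags +faststart hey_girl_2160p.mp4

Whether YouTube then offers 1080p, 1440p or 2160p renditions depends on its own processing, so treat this only as a starting point.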



-
ffmpeg h264 to mp4 conversion from multiple files fails to preserve in-sequence resolution changes
1 July 2023, by LB2
This will be a long post, so I thank you in advance for your patience in digesting it.


Context


I have different sources that generate visual content which eventually all needs to be composed into a single .mp4 file. The sources are:


- H.264 video (encoded using CUDA NVENC).
  - This video can have in-sequence resolution changes, which the H.264 codec natively supports.
  - I.e. the stream may start at HxW resolution and change mid-stream to WxH. This happens because the source is a camera device that can be rotated and flipped between portrait and landscape (think of a phone camera recording video while the phone is flipped from one orientation to the other, with the recording adjusting its encoding for proper scaling and orientation).
  - When rotation occurs, H and W are usually just swapped, but they may be entirely new values; e.g. in some cases 1024x768 switches to 768x1024, but in other cases 1024x768 may become 460x640 (this depends on source camera capabilities that I have no control over).
- JPEGs. A series (a.k.a. batch) of still JPEGs.
  - The native resolution of the JPEGs may or may not match the video resolution in the earlier bullet.
  - JPEGs can also reflect rotation of the device, so some JPEGs in a sequence may start at HxW resolution and then, from some arbitrary JPEG onward, flip to WxH. As with video, the dimensions are most likely just swapped, but may become altogether different values.
- There can be any number of batches and intermixes between video and still sources, e.g. V1 + S2 + S3 + V4 + V5 + V6 + S7 + ...
- There can be any number of resolution changes between or within batches, e.g. V1;r1 + V1;r2 + S2;r1 + S2;r3 + V3;r2 + ... (where the first subscript is the batch sequence and rX is the resolution).










Problem


I'm attempting to do this conversion with ffmpeg and can't quite get it right. The problem is that I can't get the output to respect the source resolutions; it just squishes everything into a single output resolution.



As already mentioned above, H.264 supports in-sequence (mid-stream) resolution changes, so it should be possible to convert and concatenate all the content and have the final output contain in-sequence resolution changes.


Since MP4 is just a container, I'm assuming that MP4 files can do so as well?


Attempts so far


The approach thus far has been to take each batch of content (i.e. an .h264 video or a set of JPEGs) and individually convert it to .mp4. Video is converted using -c copy to ensure it isn't transcoded, e.g.:

ffmpeg -hide_banner -i videoX.h264 -c copy -vsync vfr -video_track_timescale 90000 intermediateX.mp4



... and JPEGs are converted using -f concat:


ffmpeg -hide_banner -f concat -safe 0 -i jpegsX.txt -vf 'scale=trunc(iw/2)*2:trunc(ih/2)*2' -r 30 -vsync vfr -video_track_timescale 90000 intermediateX.mp4



... and then all the intermediates are concatenated together:


ffmpeg -hide_banner -f concat -safe 0 -i final.txt -pix_fmt yuv420p -c copy -vsync vfr -video_track_timescale 90000 -metadata title='yabadabadoo' -fflags +bitexact -flags:v +bitexact -flags:a +bitexact final.mp4



This concatenates, but if the resolution changes at some midpoint, that part of the content comes out squished/stretched in the final output.
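
For reference, the concat demuxer invoked above reads a plain text list of file directives; a hedged example of what a list such as final.txt might contain, with illustrative file names that are not taken from the post:

# lines starting with '#' are ignored by the concat demuxer
file 'intermediate1.mp4'
file 'intermediate2.mp4'
file 'intermediate3.mp4'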


Use h.264 as intermediates


All the intermediates are produced the same way, except as .h264. All the intermediate .h264 files are cat'ed together, like `cat intermediate1.h264 intermediate2.h264 > final.h264`.

If the final output is final.mp4, the output is incorrect and the images are squished/stretched.

If it is final.h264, then it at least seems to respect the aspect ratios of the input and manages to produce correct-looking output. However, examining it with ffprobe, it seems to use weird SAR ratios: the first frames are width=1440 height=3040 sample_aspect_ratio=1:1, but later the SAR takes on values like width=176 height=340 sample_aspect_ratio=1545:176, which I suspect isn't right, since all the original input had "square pixels". I think the reason is that the output was composed out of different-sized JPEGs, and the concat filter somehow caused ffmpeg to manipulate the SAR "to get things to fit".

But at least it renders respectably, though it's hard to say with ffplay whether a player would actually see the resolution change and resize accordingly.

And that's .h264; I need the final output to be .mp4.


Use a -vf filter

I tried enforcing the SAR using -vf 'scale=trunc(iw/2)*2:trunc(ih/2)*2,setsar=1:1' (the scaling is to deal with odd-dimension JPEGs), but it still produces frames with SAR values like those described earlier.

Other thoughts


For now, while I haven't given up, I'm trying to avoid having my code examine each individual JPEG in a batch to see if there are differing sizes, split the batch so that each sub-batch is homogeneous resolution-wise, and generate individual intermediate .h264 files so that the SAR remains sane, keeping my fingers crossed that the final output would work correctly. That would be very slow, unfortunately.


Question


What's the right way to deal with all of this using ffmpeg, and how do I concatenate multiple varying-resolution sources into a final .mp4 so that it respects resolution changes mid-stream?

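One workaround that is sometimes suggested for this kind of pipeline, and not something taken from the post itself, is to give up on mid-stream resolution changes and instead letterbox every intermediate onto a single fixed canvas before concatenating, so that every segment shares one resolution and a 1:1 SAR. A minimal sketch, assuming an arbitrary 1440x3040 target and 30 fps:

ffmpeg -hide_banner -i intermediate1.mp4 \
  -vf "scale=1440:3040:force_original_aspect_ratio=decrease,pad=1440:3040:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p" \
  -c:v libx264 -crf 18 -preset medium normalized1.mp4

# repeat for each intermediate, list the normalized files in final.txt, then:
ffmpeg -hide_banner -f concat -safe 0 -i final.txt -c copy final.mp4

This trades away the in-sequence resolution changes the post is trying to preserve, but it avoids the odd SAR values and the squished output.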

-
How Do I Get Python To Capture My Screen At The Right Frame Rate
14 July 2024, by John Thesaurus
I have this Python script that is supposed to record my screen on macOS.


import cv2
import numpy as np
from PIL import ImageGrab
import subprocess
import time

def record_screen():
    # Define the screen resolution
    screen_width, screen_height = 1440, 900  # Adjust this to match your screen resolution
    fps = 30  # Target FPS for recording

    # Define the ffmpeg command
    ffmpeg_cmd = [
        'ffmpeg',
        '-y',  # Overwrite output file if it exists
        '-f', 'rawvideo',
        '-vcodec', 'rawvideo',
        '-pix_fmt', 'bgr24',
        '-s', f'{screen_width}x{screen_height}',  # Size of one frame
        '-r', str(fps),  # Input frames per second
        '-i', '-',  # Input from pipe
        '-an',  # No audio
        '-vcodec', 'libx264',
        '-pix_fmt', 'yuv420p',
        '-crf', '18',  # Higher quality
        '-preset', 'medium',  # Encoding speed
        'screen_recording.mp4'
    ]

    # Start the ffmpeg process
    ffmpeg_process = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE)

    frame_count = 0
    start_time = time.time()

    while True:
        # Capture the screen
        img = ImageGrab.grab()
        img_np = np.array(img)

        # Convert and resize the frame
        frame = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR)
        resized_frame = cv2.resize(frame, (screen_width, screen_height))

        # Write the frame to ffmpeg
        ffmpeg_process.stdin.write(resized_frame.tobytes())

        # Display the frame
        cv2.imshow('Screen Recording', resized_frame)

        # Stop recording when 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    # Close the ffmpeg process
    ffmpeg_process.stdin.close()
    ffmpeg_process.wait()

    # Release everything when job is finished
    cv2.destroyAllWindows()

if __name__ == "__main__":
    record_screen()





As you can see, it should be 30 frames per second, but the problem is that when I open the file afterwards it's all sped up. I think it has to do with the frame capture rate as opposed to the encoded rate, but I'm not quite sure. If I try to slow the video down afterwards so that it plays in real time, the video is just really choppy. And the higher I make the fps, the faster the video plays, meaning the more I have to slow it down, and then it's still choppy. I'm pretty sure it captures frames at a really slow rate and then puts them in a video that plays back at 30 fps. Can anyone fix this? Anything that gets a working screen recorder on macOS, I will take.
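
A likely cause, though not confirmed in the post, is that ImageGrab.grab() only manages a few captures per second on macOS while the pipe declares 30 fps to ffmpeg, so the encoded timeline is shorter than real time and playback looks sped up. Since any working macOS recorder is acceptable to the poster, a hedged alternative is to let ffmpeg capture the screen itself through its avfoundation input device; the screen device index (1 below) is an assumption, so list the devices first:

ffmpeg -f avfoundation -list_devices true -i ""

ffmpeg -f avfoundation -framerate 30 -capture_cursor 1 -i "1" \
  -c:v libx264 -pix_fmt yuv420p -preset medium -crf 18 screen_recording.mp4

Because ffmpeg handles both the capture and the timestamps here, the recording plays back at real-time speed.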