
Other articles (46)
-
Authorisations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page -
HTML5 audio and video support
13 April 2011. MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
HTML5 audio and video support
10 April 2011. MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash fallback is used.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and audio both on conventional computers (...)
On other sites (11962)
-
What video format will allow Android MediaPlayer.seekTo() to reliably provide frame-accurate scrubbing?
8 July 2015, by Tim Closs. We have an iOS app that we are currently rebuilding for Android. The app relies on being able to scrub video with frame accuracy. We have 3D animations that are rendered out as single frames; we build subsets of frames into lots of small (1-2 second) videos; and the app provides the ability to scrub those videos and see each individual frame.
The MP4 videos we initially created work fine on iOS. When we tried to get them working on Android (using the MediaPlayer class), we entered a world of pain! What we need to do is find a video format that will play and allow frame-accurate scrubbing across all Android devices, using MediaPlayer.seekTo(). Initially we are targeting Android 3.0 and above, but we probably want to stretch back to 2.3.3 after our initial release. Here's what I've discovered so far:
(A) Android claims that the H264 "baseline profile" should be supported everywhere: (URL). However, within that, there are dozens of other settings that may or may not be supported. Is there a more fine-grained list anywhere? Currently we are converting to H264 within an MP4 container.
(B) I haven't yet seen an Android device that will accurately scrub H264 files without inserting keyframes ("intra frames"). iOS will happily take H264 files without keyframes and provide accurate scrubbing. It seems that, to allow accurate scrubbing, we need to insert a keyframe for every frame of the video (the relevant ffmpeg setting is "-g 1"). This significantly increases the file size.
(C) However, inserting a keyframe for every frame results in a video that will not play at all on the Samsung Galaxy Note 3 (Snapdragon chipset, I believe). Reducing the keyframes to every second frame or above seems to work (ffmpeg setting "-g 2").
To summarise:
MediaPlayer.seekTo() seems very dependent on the video format, and its behaviour varies across devices. Is this the intention? Is there a base level of behaviour that seekTo() is supposed to provide, regardless of format? And what video format will allow frame-accurate scrubbing (using MediaPlayer.seekTo()) across all Android devices (at least for 3.0 and above)?
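For reference, a minimal sketch of the all-keyframe encode described in points (B) and (C), assuming libx264; the file names are placeholders, not taken from the original post:

```bash
# One-keyframe-per-frame encode as described in point (B): -g 1 makes every
# frame a keyframe, and -sc_threshold 0 stops x264 from adding extra
# scene-cut keyframes of its own. Change -g 1 to -g 2 for the workaround in (C).
ffmpeg -i input.mp4 \
  -c:v libx264 -profile:v baseline \
  -g 1 -keyint_min 1 -sc_threshold 0 \
  -pix_fmt yuv420p \
  output.mp4
```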
-
Seeking CLI Tool for Creating Text Animations with Easing Curves [closed]
15 November 2023, by anonymous-dev. I'm working on a video project where I need to animate text using various easing curves for smooth and dynamic transitions, from the terminal. Specifically, I'm looking to apply the following easing curves to text animations:


bounceIn
bounceInOut
bounceOut
decelerate
ease
easeIn
easeInBack
easeInCirc
easeInCubic
easeInExpo
easeInOut
easeInOutBack
easeInOutCirc
easeInOutCubic
easeInOutCubicEmphasized
easeInOutExpo
easeInOutQuad
easeInOutQuart
easeInOutQuint
easeInOutSine
easeInQuad
easeInQuart
easeInQuint
easeInSine
easeInToLinear
easeOut
easeOutBack
easeOutCirc
easeOutCubic
easeOutExpo
easeOutQuad
easeOutQuart
easeOutQuint
easeOutSine
elasticIn
elasticInOut
elasticOut
fastEaseInToSlowEaseOut
fastLinearToSlowEaseIn
fastOutSlowIn
linearToEaseOut
slowMiddle



My initial thought was to use ffmpeg for this task; however, it appears that ffmpeg may not support these advanced easing curves for text animation.


I am seeking recommendations for a command-line interface (CLI) tool that can handle these types of animations.


Key requirements include:


- Easing Curve Support: the tool should support a wide range of easing curves, as listed above.
- Efficiency: ability to render animations quickly, preferably with performance close to what I can achieve with ffmpeg filters.
- Direct Rendering: ideally, the tool should render animations in one go, without the need to write each individual frame to disk.
- Transformations: should work with transformations such as translate, scale and rotate. For example, a text translates from A to B with an easing curve applied to the transition.

I looked into ImageMagick, but it seems more suited for frame-by-frame image processing, which is not efficient for my needs.


Could anyone suggest a CLI tool that fits these criteria? Or is there a way to extend ffmpeg's capabilities to achieve these animations?
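On the ffmpeg route specifically: the drawtext filter accepts arbitrary arithmetic expressions for its x and y parameters, so individual easing curves can be written out by hand. A minimal sketch of an easeOutCubic slide is below; the test source, text, size and duration are illustrative assumptions, not something from the question:

```bash
# Slide text from the left edge to the right edge over 2 seconds with an
# easeOutCubic curve: progress p = t/2, x = (w - tw) * (1 - (1 - p)^3).
# The cube is expanded into a product so the expression needs no commas.
# Add fontfile=/path/to/font.ttf inside drawtext if your build lacks fontconfig.
ffmpeg -f lavfi -i color=c=black:s=1280x720:d=2 \
  -vf "drawtext=text='Hello':fontsize=64:fontcolor=white:y=(h-th)/2:x='(w-tw)*(1-(1-t/2)*(1-t/2)*(1-t/2))'" \
  -c:v libx264 -pix_fmt yuv420p ease_out_cubic.mp4
```

Each curve in the list above is just a different formula over the same progress variable, so the approach extends to the other easings, although bounce and elastic expressions get long; whether that counts as ffmpeg "supporting" them is a fair question.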


-
How to process remote audio/video stream on WebRTC server in real-time? [closed]
7 September 2020, by Kartik Rokde. I'm new to audio/video streaming. I'm using AntMedia Pro for audio/video conferencing. There will be 5-8 hosts who will be speaking, and the expected audience size is 15-20k (I need to mention this because it won't be P2P conferencing, but an MCU architecture).


I want to offer a feature where a user can request "convert the voice to female / robot / whatever", which would let that user hear the manipulated voice in the conference.


From what I understand, I need to do real-time processing on the server to be able to do this: intercept the stream on the server, do some processing (change the voice) on each of the tracks, and stream it back to the requester.


The first challenge I'm facing is: how do I get the stream and/or the individual tracks on the server?


I did some research on how to process remote WebRTC streams in real time on the server. I came across some keywords like RTMP ingestion and ffmpeg.

Here are a few questions I went through, but I didn't find the answers I'm looking for:


- Receive webRTC video stream using python opencv in real-time
- Extract frames as images from an RTMP stream in real-time
- android stream real time video to streaming server

I need help receiving the real-time stream on the server (any technology, preferably Python or Golang) and streaming it back.
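Not a full answer to the WebRTC-level interception, but one way to prototype the "pull, process, re-publish" idea, assuming the media server exposes an RTMP ingest/egress endpoint, is to let ffmpeg pull the stream, pitch-shift the audio, and push it back out. The URLs, sample rate and filter values below are illustrative assumptions, not taken from the AntMedia setup in the question:

```bash
# Pull a live stream, raise the voice pitch by ~25% without changing duration
# (asetrate=60000 is 48 kHz x 1.25: it shifts pitch up but also speeds the
# audio up, so atempo=0.8 compensates), leave the video untouched, and
# re-publish the result under a new stream key.
ffmpeg -i rtmp://media.example.com/live/host1 \
  -af "asetrate=60000,aresample=48000,atempo=0.8" \
  -c:v copy -c:a aac -b:a 128k \
  -f flv rtmp://media.example.com/live/host1_robot
```

Doing this per listener and per track inside the conference itself would still require hooks in the media server, but a pipeline like this is a cheap way to validate the voice-processing step before touching the WebRTC stack.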