
Media (2)
-
SPIP - plugins - embed code - Example
2 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
Publishing an image simply
13 April 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (99)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...) -
Videos
21 April 2011, by
As with "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 video tag.
One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name one) and that each browser natively handles only certain video formats.
Its main advantage is native video support in browsers, which makes it possible to do without Flash and (...) -
(De)Activating features (plugins)
18 February 2011, by
To manage adding and removing extra features (plugins), MediaSPIP uses SVP as of version 0.2.
SVP makes it easy to activate plugins from the MediaSPIP configuration area.
To access it, simply go to the configuration area and then to the "Gestion des plugins" page.
MediaSPIP ships by default with the full set of so-called "compatible" plugins; they have been tested and integrated to work perfectly with each (...)
On other sites (10134)
-
Is there another way to export a frame in ffmpeg to a Texture2D? My code works on Windows but not Linux
5 December 2024, by Robert Russell
Sound works on Linux the same as it did on Windows, but the video is just a black screen, and when I attempt to save the frames as BMP files, all of them are corrupt/empty. I am using FFmpeg.AutoGen to interface with the libraries: https://github.com/Ruslan-B/FFmpeg.AutoGen. The file is VP8 and OGG in an MKV container, though the extension is AVI for some reason.



I tried messing with the order of the code a bit. I checked to make sure the build of FFmpeg on Linux had VP8. I was searching online but had trouble finding another way to do what I am doing. This is a contribution to the OpenVIII project. My fork -> https://github.com/Sebanisu/OpenVIII



This just preps the scaler to change the pixel format, or else people have blue faces.



private void PrepareScaler()
{
    if (MediaType != AVMediaType.AVMEDIA_TYPE_VIDEO)
    {
        return;
    }

    // Convert from the decoder's pixel format to RGBA, keeping the original dimensions.
    ScalerContext = ffmpeg.sws_getContext(
        Decoder.CodecContext->width, Decoder.CodecContext->height, Decoder.CodecContext->pix_fmt,
        Decoder.CodecContext->width, Decoder.CodecContext->height, AVPixelFormat.AV_PIX_FMT_RGBA,
        ffmpeg.SWS_ACCURATE_RND, null, null, null);
    Return = ffmpeg.sws_init_context(ScalerContext, null, null);

    CheckReturn();
}




Converts a frame to a BMP.
I am thinking this is where the problem is, because I added bitmap.Save to this and got empty BMPs.



public Bitmap FrameToBMP()
{
    Bitmap bitmap = null;
    BitmapData bitmapData = null;

    try
    {
        bitmap = new Bitmap(Decoder.CodecContext->width, Decoder.CodecContext->height, PixelFormat.Format32bppArgb);
        AVPixelFormat v = Decoder.CodecContext->pix_fmt;

        // Lock the bitmap for writing; sws_scale writes into its buffer.
        // (On Mono/libgdiplus a ReadOnly lock may not copy changes back on UnlockBits.)
        bitmapData = bitmap.LockBits(new Rectangle(0, 0, Decoder.CodecContext->width, Decoder.CodecContext->height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

        byte* ptr = (byte*)bitmapData.Scan0;

        byte*[] dstData = { ptr, null, null, null };
        int[] dstLinesize = { bitmapData.Stride, 0, 0, 0 };

        // Convert the video frame into the RGBA bitmap buffer.
        ffmpeg.sws_scale(ScalerContext, Decoder.Frame->data, Decoder.Frame->linesize, 0, Decoder.CodecContext->height, dstData, dstLinesize); //sws_scale broken on linux?
    }
    finally
    {
        if (bitmap != null && bitmapData != null)
        {
            bitmap.UnlockBits(bitmapData);
        }
    }
    return bitmap;
}
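One way to tell whether sws_scale or the System.Drawing path is failing on Linux is to dump the converted buffer straight to disk, bypassing GDI+ entirely. A sketch of the idea in Python rather than C# (the byte layout is what matters; the width, height, and buffer are assumed inputs):

```python
def rgba_to_ppm(width: int, height: int, rgba: bytes) -> bytes:
    # Binary PPM (P6): a short text header followed by raw RGB triplets.
    # Drop every 4th byte (alpha) from the RGBA buffer sws_scale produced.
    header = f"P6\n{width} {height}\n255\n".encode("ascii")
    rgb = bytearray()
    for i in range(0, len(rgba), 4):
        rgb += rgba[i:i + 3]
    return header + bytes(rgb)
```

If a file written this way shows the frame correctly, the sws_scale output is fine and the problem is in the bitmap locking/saving; if it is black too, the decode/scale step is at fault.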




After I get a bitmap, we turn it into a Texture2D so we can draw it.



public Texture2D FrameToTexture2D()
{
    // Get a Bitmap; there might be a way to skip this step.
    using (Bitmap frame = FrameToBMP())
    {
        //string filename = Path.Combine(Path.GetTempPath(), $"{Path.GetFileNameWithoutExtension(DecodedFileName)}_rawframe.{Decoder.CodecContext->frame_number}.bmp");
        //frame.Save(filename);
        BitmapData bmpdata = null;
        Texture2D frameTex = null;
        try
        {
            // Create the texture.
            frameTex = new Texture2D(Memory.spriteBatch.GraphicsDevice, frame.Width, frame.Height, false, SurfaceFormat.Color);
            // Fill it with the bitmap's pixels.
            bmpdata = frame.LockBits(new Rectangle(0, 0, frame.Width, frame.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
            byte[] texBuffer = new byte[bmpdata.Width * bmpdata.Height * 4];
            Marshal.Copy(bmpdata.Scan0, texBuffer, 0, texBuffer.Length);

            frameTex.SetData(texBuffer);
        }
        finally
        {
            if (bmpdata != null)
            {
                frame.UnlockBits(bmpdata);
            }
        }
        return frameTex;
    }
}




I can post more if you want; it's pretty much all up on my fork.



Video plays back as it does on Windows, as smooth as 15 fps can be. :)


-
How to sync network audio with a different network video and play it with chewie
26 March 2023, by Rudra Sharma
I am trying to stream Reddit videos in my app. For that reason I am using the Reddit API, but it only gives a video URL like 'redd.it/mpym0z9q8opa1/DASH_1080.mp4?source=fallback' with no audio. After some research I found out that we can get the audio URL by editing the video URL: 'redd.it/mpym0z9q8opa1/DASH_audio.mp4?source=fallback'.
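The URL rewrite described above can be sketched as a small helper (Python, hypothetical function name; assumes the audio track always lives at DASH_audio.mp4 next to the video file):

```python
def audio_url_from_video(video_url: str) -> str:
    # Split off the query string, swap the last path segment
    # (e.g. DASH_1080.mp4) for DASH_audio.mp4, then reattach the query.
    base, _, query = video_url.partition("?")
    prefix = base.rsplit("/", 1)[0]
    audio = prefix + "/DASH_audio.mp4"
    return audio + ("?" + query if query else "")
```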


Now that I have both the audio and video URLs, how can I sync them over the network and stream them in my app using the chewie package (video player)?
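Since chewie/video_player take a single source, one preprocessing option (not from the question; it assumes an ffmpeg binary is available somewhere, e.g. server-side) is to merge the two streams into one file with stream copy before playback. A sketch that only builds the command line:

```python
def build_mux_cmd(video_url: str, audio_url: str, out_path: str) -> list:
    # Stream-copy both inputs into one container: no re-encoding,
    # so the merge is fast and lossless.
    return [
        "ffmpeg", "-y",
        "-i", video_url,   # video stream
        "-i", audio_url,   # audio stream
        "-c", "copy",      # copy both codecs as-is
        out_path,
    ]
```

The resulting list can be passed to subprocess.run; the app would then stream the merged file instead of the bare DASH video.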


This is my code so far:


import 'dart:async';
import 'dart:convert';
import 'package:http/http.dart' as http;
import 'package:flutter/material.dart';
import 'package:video_player/video_player.dart';
import 'package:chewie/chewie.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
 @override
 Widget build(BuildContext context) {
 return MaterialApp(
 title: 'Reddit Videos',
 theme: ThemeData(
 primarySwatch: Colors.blue,
 visualDensity: VisualDensity.adaptivePlatformDensity,
 ),
 home: VideoPlayerScreen(),
 );
 }
}

class VideoPlayerScreen extends StatefulWidget {
 @override
 _VideoPlayerScreenState createState() => _VideoPlayerScreenState();
}

class _VideoPlayerScreenState extends State<VideoPlayerScreen> {
 final List<String> _videoUrls = [];

 @override
 void initState() {
 super.initState();
 _loadVideos();
 }

 Future<void> _loadVideos() async {
 try {
 final videoUrls =
 await RedditApi.getVideoUrlsFromSubreddit('aww');

 setState(() {
 _videoUrls.addAll(videoUrls);
 });
 } catch (e) {
 print(e);
 }
 }

 @override
 Widget build(BuildContext context) {
 return Scaffold(
 appBar: AppBar(
 title: Text('Reddit Videos'),
 ),
 body: SafeArea(
 child: _videoUrls.isNotEmpty
 ? _buildVideosList()
 : Center(child: CircularProgressIndicator()),
 ),
 );
 }

 Widget _buildVideosList() {
 return ListView.builder(
 itemCount: _videoUrls.length,
 itemBuilder: (context, index) {
 return Padding(
 padding: const EdgeInsets.all(8.0),
 child: Chewie(
 controller: ChewieController(
 videoPlayerController: VideoPlayerController.network(
 _videoUrls[index],
 ),
 aspectRatio: 9 / 16,
 autoPlay: true,
 looping: true,
 autoInitialize: true,
 ),
 ),
 );
 },
 );
 }
}

class RedditApi {
 static const String BASE_URL = 'https://www.reddit.com';
 static const String CLIENT_ID = 'id';
 static const String CLIENT_SECRET = 'secret';

 static Future<List<String>> getVideoUrlsFromSubreddit(
 String subredditName) async {
 final response = await http.get(
 Uri.parse('$BASE_URL/r/$subredditName/top.json?limit=10'),
 headers: {'Authorization': 'Client-ID $CLIENT_ID'});

 if (response.statusCode == 200) {
 final jsonData = jsonDecode(response.body);
 final postsData = jsonData['data']['children'];

 final videoUrls = <String>[];

 for (var postData in postsData) {
 if (postData['data']['is_video']) {
 videoUrls.add(postData['data']['media']['reddit_video']
 ['fallback_url']);
 }
 }

 return videoUrls;
 } else {
 throw Exception("Failed to load videos from subreddit");
 }
 }
}


I think the code is self-explanatory about what I am trying to achieve (trying to make a client for Reddit).


-
FFmpeg - Issue scaling and overlaying image
19 July 2019, by HB
Firstly, the screen dimensions of the device I'm using are 1080 x 2280 pixels, a 19:9 ratio; this is important and will be explained later in the question.
A few months ago I asked this question. The answer provided worked perfectly:
"-i", video.mp4, "-i", image.png, "-filter_complex", "[0:v]pad=iw:2*trunc(iw*16/9/2):(ow-iw)/2:(oh-ih)/2[v0];[1:v][v0]scale2ref[v1][v0];[v0][v1]overlay=x=(W-w)/2:y=(H-h)/2[v]", "-map", "[v]", "-map", "0:a", "-c:v", "libx264", "-preset", "ultrafast", "-r", outFPS, output.mp4
Shortly after implementing and releasing this, I started getting messages from users complaining that the image placed on top of the video is not at the same position after saving.
I noticed that in the command above the pad ratio is set for 16:9; in other words, the above will not work on devices that have a screen ratio of 19:9. I then asked another question about this issue, and after a long conversation with @Gyan, the command was changed to the following:
"-i", video.mp4, "-i", image.png, "-filter_complex", "[0:v]scale=iw*sar:ih,setsar=1,pad='max(iw\,2*trunc(ih*9/16/2))':'max(ih\,2*trunc(ow*16/9/2))':(ow-iw)/2:(oh-ih)/2[v0];[1:v][v0]scale2ref[v1][v0];[v0][v1]overlay=x=(W-w)/2:y=(H-h)/2[v]", "-map", "[v]", "-map", "0:a", "-c:v", "libx264", "-preset", "ultrafast", "-r", outFPS, output.mp4
Testing on a device that has a 16:9 ratio works perfectly.
Now, testing with the device mentioned above, I replace the ratios in the command with 19/9/2 and 9/19/2:
"-i", video.mp4, "-i", image.png, "-filter_complex", "[0:v]scale=iw*sar:ih,setsar=1,pad='max(iw\,2*trunc(ih*9/19/2))':'max(ih\,2*trunc(ow*19/9/2))':(ow-iw)/2:(oh-ih)/2[v0];[1:v][v0]scale2ref[v1][v0];[v0][v1]overlay=x=(W-w)/2:y=(H-h)/2[v]", "-map", "[v]", "-map", "0:a", "-c:v", "libx264", "-preset", "ultrafast", "-r", outFPS, output.mp4
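To see what the pad expressions actually compute, they can be evaluated by hand. A sketch mirroring ffmpeg's arithmetic (trunc, then the max against the input size; the 1080x1920 input below is an assumed example, and num:den is the target height:width ratio):

```python
from math import trunc

def pad_dims(iw: int, ih: int, num: int, den: int):
    # Mirrors pad='max(iw,2*trunc(ih*den/num/2))':'max(ih,2*trunc(ow*num/den/2))'
    # where ow in the height expression refers to the already-computed width.
    ow = max(iw, 2 * trunc(ih * den / num / 2))
    oh = max(ih, 2 * trunc(ow * num / den / 2))
    return ow, oh
```

For a 1080x1920 portrait video, the 16:9 expressions leave it at 1080x1920 (no padding), while the 19:9 expressions pad it vertically to 1080x2280, the device's screen size.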
Here is the result I get (I changed my player's background to green to make it easier to see; the blue line is the image that I want to overlay):
Original video
After processing
Here are the issues with the above:
- The line that was drawn on the original video is not scaled, but it is still at the correct position.
- The video is no longer the original size; the width and height are reduced, and my player's background can now be seen on the left and right of the video.
Here is the result I'm trying to achieve:
You will notice the following:
- The video is not resized; it still has the same dimensions as the original.
- The line that was drawn is still at the same position and is not scaled.
- Black padding was added to the top and bottom of the video to fill the remaining space. The green background is no longer visible.
Any advice on achieving the above would be greatly appreciated.
I will be giving 300 bounty points to the user who can help me fix this.
EDIT 1
Here is an input video, image, and the expected output, as asked for in the comment section. This is using a device that has a 16:9 aspect ratio and screen dimensions of 1920x1080.
Here is another example of the expected output (I also included the input image and input video).
EDIT 2
I think it's worth mentioning that the input image will always be the size/dimensions of the device's screen, so it will always have the same aspect ratio as the screen as well. The size/dimensions of the input video will vary.