Advanced search

Media (2)

Word: - Tags -/media

Other articles (99)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Videos

    21 April 2011

    As with "audio" documents, MediaSPIP displays videos wherever possible using the HTML5 <video> tag.
    One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name one) and that each browser natively supports only certain video formats.
    Its main advantage is that it benefits from native video support in browsers, which makes it possible to do without Flash and (...)

  • (De)activating features (plugins)

    18 February 2011

    To manage the addition and removal of extra features (or plugins), MediaSPIP uses SVP as of version 0.2.
    SVP makes it easy to activate plugins from the MediaSPIP configuration area.
    To access it, simply go to the configuration area and then to the "Gestion des plugins" (plugin management) page.
    MediaSPIP ships by default with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work perfectly with each (...)

On other sites (10134)

  • Is there another way to export a frame in FFmpeg to a Texture2D? My code works in Windows but not Linux

    5 December 2024, by Robert Russell

    Sound works in Linux the same as it did in Windows, but the video is just a black screen, and when I attempt to save the frames as BMP files, all of them are corrupt/empty. I am using FFmpeg.AutoGen to interface with the libraries: https://github.com/Ruslan-B/FFmpeg.AutoGen. The file is VP8 and OGG in an MKV container, though the extension is AVI for some reason.

    I tried messing with the order of the code a bit. I checked to make sure the build of FFmpeg on Linux had VP8. I was searching online but had trouble finding another way to do what I am doing. This is to contribute to the OpenVIII project. My fork -> https://github.com/Sebanisu/OpenVIII

    This just preps the scaler to change the pixel format, or else people have blue faces.

        private void PrepareScaler()
        {
            if (MediaType != AVMediaType.AVMEDIA_TYPE_VIDEO)
            {
                return;
            }

            // Build a scaler that converts the decoder's native pixel format
            // to RGBA at the same resolution.
            ScalerContext = ffmpeg.sws_getContext(
                Decoder.CodecContext->width, Decoder.CodecContext->height, Decoder.CodecContext->pix_fmt,
                Decoder.CodecContext->width, Decoder.CodecContext->height, AVPixelFormat.AV_PIX_FMT_RGBA,
                ffmpeg.SWS_ACCURATE_RND, null, null, null);
            Return = ffmpeg.sws_init_context(ScalerContext, null, null);

            CheckReturn();
        }
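
    One guard worth adding here (my own suggestion, reusing the fields above): sws_getContext returns null when it cannot set up the requested conversion, for example for a pixel format that build does not support, and a silently null scaler would produce exactly the kind of black output described above.

        // Hedged sketch (my own addition): fail loudly if the scaler could
        // not be created, instead of scaling into nothing later on.
        if (ScalerContext == null)
        {
            throw new InvalidOperationException(
                "sws_getContext failed for pix_fmt " + Decoder.CodecContext->pix_fmt);
        }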

    This converts the frame to a BMP. I am thinking this is where the problem is, because I added Bitmap.Save here and got empty BMPs.

        public Bitmap FrameToBMP()
        {
            Bitmap bitmap = null;
            BitmapData bitmapData = null;

            try
            {
                bitmap = new Bitmap(Decoder.CodecContext->width, Decoder.CodecContext->height, PixelFormat.Format32bppArgb);

                // Lock the bitmap so sws_scale can write into its pixel buffer.
                bitmapData = bitmap.LockBits(
                    new Rectangle(0, 0, Decoder.CodecContext->width, Decoder.CodecContext->height),
                    ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

                byte* ptr = (byte*)bitmapData.Scan0;

                // Destination plane/stride for sws_scale (packed RGBA is a single plane).
                byte*[] dstData = { ptr, null, null, null };
                int[] dstLinesize = { bitmapData.Stride, 0, 0, 0 };

                // Convert the decoded video frame into the RGB bitmap.
                ffmpeg.sws_scale(ScalerContext, Decoder.Frame->data, Decoder.Frame->linesize,
                    0, Decoder.CodecContext->height, dstData, dstLinesize); // sws_scale broken on Linux?
            }
            finally
            {
                if (bitmap != null && bitmapData != null)
                {
                    bitmap.UnlockBits(bitmapData);
                }
            }
            return bitmap;
        }
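
    A small check that might help localize the empty-BMP problem (again my own addition, assuming the same Decoder fields): if the decoder never actually delivered a frame, the first plane pointer is null and there is nothing for sws_scale to read.

        // Hypothetical guard before calling sws_scale: a null plane pointer
        // means no decoded frame arrived, which would match the empty BMPs.
        if (Decoder.Frame->data[0] == null)
        {
            throw new InvalidOperationException(
                "No decoded frame data; check the avcodec_receive_frame return code.");
        }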

    After I get a bitmap, we turn it into a Texture2D so we can draw it.

        public Texture2D FrameToTexture2D()
        {
            // Get a Bitmap; there might be a way to skip this step.
            using (Bitmap frame = FrameToBMP())
            {
                //string filename = Path.Combine(Path.GetTempPath(), $"{Path.GetFileNameWithoutExtension(DecodedFileName)}_rawframe.{Decoder.CodecContext->frame_number}.bmp");
                //frame.Save(filename);

                BitmapData bmpdata = null;
                Texture2D frameTex = null;
                try
                {
                    // Create the texture; the GC will collect frameTex.
                    frameTex = new Texture2D(Memory.spriteBatch.GraphicsDevice, frame.Width, frame.Height, false, SurfaceFormat.Color);

                    // Fill it with the bitmap's pixels.
                    bmpdata = frame.LockBits(new Rectangle(0, 0, frame.Width, frame.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
                    byte[] texBuffer = new byte[bmpdata.Width * bmpdata.Height * 4]; // GC here
                    Marshal.Copy(bmpdata.Scan0, texBuffer, 0, texBuffer.Length);

                    frameTex.SetData(texBuffer);
                }
                finally
                {
                    if (bmpdata != null)
                    {
                        frame.UnlockBits(bmpdata);
                    }
                }
                return frameTex;
            }
        }
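
    On the title question (another way to get from a frame to a Texture2D): below is a minimal Bitmap-free sketch, assuming the same ScalerContext, Decoder and Memory fields as above and an unsafe context. sws_scale can write straight into a managed RGBA buffer that SetData accepts, which also avoids System.Drawing entirely; on Linux that API is backed by libgdiplus, whose behavior can differ from GDI+ on Windows.

        // Hedged sketch, not the project's actual method: convert the decoded
        // frame directly into a managed buffer and upload it to the texture.
        public Texture2D FrameToTexture2DDirect()
        {
            int width = Decoder.CodecContext->width;
            int height = Decoder.CodecContext->height;
            byte[] buffer = new byte[width * height * 4]; // tightly packed RGBA

            fixed (byte* ptr = buffer)
            {
                byte*[] dstData = { ptr, null, null, null };
                int[] dstLinesize = { width * 4, 0, 0, 0 };
                ffmpeg.sws_scale(ScalerContext, Decoder.Frame->data, Decoder.Frame->linesize,
                    0, height, dstData, dstLinesize);
            }

            Texture2D frameTex = new Texture2D(Memory.spriteBatch.GraphicsDevice,
                width, height, false, SurfaceFormat.Color);
            frameTex.SetData(buffer);
            return frameTex;
        }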

    I can post more if you want; it's pretty much all up on my fork.

    The video should play back as it does in Windows; as smooth as 15 fps can be. :)

  • How to sync network audio with a different network video and play it with chewie

    26 March 2023, by Rudra Sharma

    I am trying to stream Reddit videos in my app. For that reason I am using the Reddit API, but it only gives a video URL like 'redd.it/mpym0z9q8opa1/DASH_1080.mp4?source=fallback' with no audio. After some research, I found that we can get the audio URL by editing the video URL: 'redd.it/mpym0z9q8opa1/DASH_audio.mp4?source=fallback'.

    Now that I have both the audio and video URLs, how can I sync them over the network and stream them in my app using the chewie package (video player)?
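
    For reference, one common way around syncing two separate streams is to mux them into a single file before playback; here is a hedged example with the ffmpeg CLI (the input names are taken from the URLs above, and muxed.mp4 is my own placeholder):

    ffmpeg -i DASH_1080.mp4 -i DASH_audio.mp4 -map 0:v:0 -map 1:a:0 -c copy muxed.mp4

    This copies both tracks without re-encoding, so the player only ever sees one URL; running it on-device would need something like the ffmpeg_kit_flutter package, which is an assumption on my part.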

    This is my code so far:

    import 'dart:async';
    import 'dart:convert';
    import 'package:http/http.dart' as http;
    import 'package:flutter/material.dart';
    import 'package:video_player/video_player.dart';
    import 'package:chewie/chewie.dart';

    void main() => runApp(MyApp());

    class MyApp extends StatelessWidget {
      @override
      Widget build(BuildContext context) {
        return MaterialApp(
          title: 'Reddit Videos',
          theme: ThemeData(
            primarySwatch: Colors.blue,
            visualDensity: VisualDensity.adaptivePlatformDensity,
          ),
          home: VideoPlayerScreen(),
        );
      }
    }

    class VideoPlayerScreen extends StatefulWidget {
      @override
      _VideoPlayerScreenState createState() => _VideoPlayerScreenState();
    }

    class _VideoPlayerScreenState extends State<VideoPlayerScreen> {
      final List<String> _videoUrls = [];

      @override
      void initState() {
        super.initState();
        _loadVideos();
      }

      Future<void> _loadVideos() async {
        try {
          final videoUrls =
              await RedditApi.getVideoUrlsFromSubreddit('aww');

          setState(() {
            _videoUrls.addAll(videoUrls);
          });
        } catch (e) {
          print(e);
        }
      }

      @override
      Widget build(BuildContext context) {
        return Scaffold(
          appBar: AppBar(
            title: Text('Reddit Videos'),
          ),
          body: SafeArea(
            child: _videoUrls.isNotEmpty
                ? _buildVideosList()
                : Center(child: CircularProgressIndicator()),
          ),
        );
      }

      Widget _buildVideosList() {
        return ListView.builder(
          itemCount: _videoUrls.length,
          itemBuilder: (context, index) {
            return Padding(
              padding: const EdgeInsets.all(8.0),
              child: Chewie(
                controller: ChewieController(
                  videoPlayerController: VideoPlayerController.network(
                    _videoUrls[index],
                  ),
                  aspectRatio: 9 / 16,
                  autoPlay: true,
                  looping: true,
                  autoInitialize: true,
                ),
              ),
            );
          },
        );
      }
    }

    class RedditApi {
      static const String BASE_URL = 'https://www.reddit.com';
      static const String CLIENT_ID = 'id';
      static const String CLIENT_SECRET = 'secret';

      static Future<List<String>> getVideoUrlsFromSubreddit(
          String subredditName) async {
        final response = await http.get(
            Uri.parse('$BASE_URL/r/$subredditName/top.json?limit=10'),
            headers: {'Authorization': 'Client-ID $CLIENT_ID'});

        if (response.statusCode == 200) {
          final jsonData = jsonDecode(response.body);
          final postsData = jsonData['data']['children'];

          final videoUrls = <String>[];

          for (var postData in postsData) {
            if (postData['data']['is_video']) {
              videoUrls.add(postData['data']['media']['reddit_video']
                  ['fallback_url']);
            }
          }

          return videoUrls;
        } else {
          throw Exception("Failed to load videos from subreddit");
        }
      }
    }
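
    One caveat worth flagging (my own note, not from the original post): the ChewieController and VideoPlayerController above are created inline in itemBuilder and never disposed; keeping them in the State and releasing them in dispose() avoids leaking platform video players.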


    I think the code is self-explanatory about what I am trying to achieve (trying to make a client for Reddit).


  • FFmpeg - Issue scaling and overlaying image

    19 July 2019, by HB.

    Firstly, the screen dimensions of the device I'm using are 1080 x 2280 pixels, a 19:9 ratio; this is important and will be explained later in the question.


    A few months ago I asked this question. The answer provided worked perfectly:

    "-i", video.mp4, "-i", image.png, "-filter_complex", "[0:v]pad=iw:2*trunc(iw*16/9/2):(ow-iw)/2:(oh-ih)/2[v0];[1:v][v0]scale2ref[v1][v0];[v0][v1]overlay=x=(W-w)/2:y=(H-h)/2[v]", "-map", "[v]", "-map", "0:a", "-c:v", "libx264", "-preset", "ultrafast", "-r", outFPS, output.mp4

    Shortly after implementing and releasing this, I started getting messages from users complaining that the image placed on top of the video was not at the same position after saving.
    I noticed that in the command above the pad ratio is set for 16:9; in other words, the above will not work on devices that have a 19:9 screen ratio.

    I then asked another question about this issue, and after a long conversation with @Gyan, the command was changed to the following:

    "-i", video.mp4, "-i", image.png, "-filter_complex", "[0:v]scale=iw*sar:ih,setsar=1,pad='max(iw\,2*trunc(ih*9/16/2))':'max(ih\,2*trunc(ow*16/9/2))':(ow-iw)/2:(oh-ih)/2[v0];[1:v][v0]scale2ref[v1][v0];[v0][v1]overlay=x=(W-w)/2:y=(H-h)/2[v]", "-map", "[v]", "-map", "0:a", "-c:v", "libx264", "-preset", "ultrafast", "-r", outFPS, output.mp4

    Testing on a device that has a 16:9 ratio works perfectly.


    Now, testing with the device mentioned above, I replaced the ratios in the command with the following (19/9/2 and 9/19/2):

    "-i", video.mp4, "-i", image.png, "-filter_complex", "[0:v]scale=iw*sar:ih,setsar=1,pad='max(iw\,2*trunc(ih*9/19/2))':'max(ih\,2*trunc(ow*19/9/2))':(ow-iw)/2:(oh-ih)/2[v0];[1:v][v0]scale2ref[v1][v0];[v0][v1]overlay=x=(W-w)/2:y=(H-h)/2[v]", "-map", "[v]", "-map", "0:a", "-c:v", "libx264", "-preset", "ultrafast", "-r", outFPS, output.mp4

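    To make the expressions concrete (my own arithmetic, assuming a 1920x1080 input), the pad above works out to:

    width  = max(1920, 2*trunc(1080*9/19/2)) = max(1920, 510)  = 1920
    height = max(1080, 2*trunc(1920*19/9/2)) = max(1080, 4052) = 4052

    So the video lands on a 1920x4052 canvas (roughly 9:19 portrait) before scale2ref and the overlay run.
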
    Here is the result I get:

    I changed my player's background to green to make it easier to see. The blue line is the image that I want to overlay.

    Original video: [screenshot]

    After processing: [screenshot]

    Here are the issues with the above:

    • The line that was drawn on the original video is not scaled, but it is still at the correct position.
    • The video is no longer the original size; the width and height are reduced, and my player's background can now be seen to the left and right of the video.

    Here is the result I'm trying to achieve:

    Expected result: [screenshot]

    You will notice the following:

    • The video is not resized; it still has the same dimensions as the original.
    • The line that was drawn is still at the same position and is not scaled.
    • Black padding was added to the top and bottom of the video to fill the remaining space. The green background is no longer visible.

    Any advice on achieving the above would be greatly appreciated.

    I will be giving 300 bounty points to the user who can help me fix this.


    EDIT 1

    Here is an input video, an image, and the expected output, as asked for in the comment section. This uses a device with a 16:9 aspect ratio and screen dimensions of 1920x1080.

    Here is another example of the expected output (I also included the input image and input video).


    EDIT 2

    I think it's worth mentioning that the input image will always match the size/dimensions of the device's screen, so it will always have the same aspect ratio as the screen. The size/dimensions of the input video will vary.