
Media (3)
-
MediaSPIP Simple: future default graphic theme?
26 September 2013, by
Updated: October 2013
Language: French
Type: Video
-
GetID3 - File information block
9 April 2013, by
Updated: May 2013
Language: French
Type: Image
-
GetID3 - Additional buttons
9 April 2013, by
Updated: April 2013
Language: French
Type: Image
Other articles (43)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out. -
Adding notes and captions to images
7 February 2011, by
To add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...) -
HTML5 audio and video support
13 avril 2011, parMediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (6042)
-
My journey to Coviu
27 October 2015, by silvia
My new startup just released our MVP – this is the story of what got me here.
I love creating new applications that let people do their work better or in a manner that wasn’t possible before.
My first such passion was as a student intern when I built a system for a building and loan association’s monthly customer magazine. The group I worked with was managing their advertiser contacts through a set of paper cards and I wrote a dBase based system (yes, that long ago) that would manage their customer relationships. They loved it – until it got replaced by an SAP system that cost 100 times what I cost them, had really poor UX, and only gave them half the functionality. It was a corporate system with ongoing support, which made all the difference to them.
The story repeated itself with a CRM for my uncle's construction company, and with a resume and quotation management system for Accenture right after Uni, both of which I left behind when I decided to go into research.
Even as a PhD student, I never lost sight of challenges that people were facing and wanted to develop technology to overcome problems. The aim of my PhD thesis was to prepare for the oncoming onslaught of audio and video on the Internet (yes, this was 1994!) by developing algorithms to automatically extract and locate information in such files, which would enable users to structure, index and search such content.
Many of the use cases that we explored are now part of products or continue to be challenges: finding music that matches your preferences, identifying music or video pieces e.g. to count ads on the radio or to mark copyright infringement, or the automated creation of video summaries such as trailers.
This continued when I joined the CSIRO in Australia – I was working on segmenting speech into words or talk spurts since that would simplify captioning & subtitling, and on MPEG-7 which was a (slightly over-engineered) standard to structure metadata about audio and video.
In 2001 I had the idea of replicating the Web for videos: i.e. creating hyperlinked and searchable video-only experiences. We called it “Annodex” for annotated and indexed video and it needed full-screen hyperlinked video in browsers – man were we ahead of our time! It was my first step into standards, got several IETF RFCs to my name, and started my involvement with open codecs through Xiph.
Around the time that YouTube was founded in 2006, I founded Vquence – originally a video search company for the Web, but pivoted to a video metadata mining company. Vquence still exists and continues to sell its data to channel partners, but it lacks the user impact that has always driven my work.
As the video element started being developed for HTML5, I had to get involved. I contributed many use cases to the W3C, became a co-editor of the HTML5 spec and focused on video captioning with WebVTT while contracting to Mozilla and later to Google. We made huge progress and today the technology exists to publish video on the Web with captions, making the Web more inclusive for everybody. I contributed code to YouTube and Google Chrome, but was keen to make a bigger impact again.
The opportunity came when a couple of former CSIRO colleagues who now worked for NICTA approached me to get me interested in addressing new use cases for video conferencing in the context of WebRTC. We worked on a kiosk-style solution to service delivery for large service organisations, particularly targeting government. The emerging WebRTC standard posed many technical challenges that we addressed by building rtc.io, by contributing to the standards, and registering bugs on the browsers.
Fast-forward through the development of a few further custom solutions for customers in health and education and we are starting to see patterns of need emerge. The core learning that we’ve come away with is that to get things done, you have to go beyond “talking heads” in a video call. It’s not just about seeing the other person, but much more about having a shared view of the things that need to be worked on and a shared way of interacting with them. Also, we learnt that the things that are being worked on are quite varied and may include multiple input cameras, digital documents, Web pages, applications, device data, controls, and forms.
So we set out to build a solution that would enable productive remote collaboration to take place. It would need to provide an excellent user experience, it would need to be simple to work with, provide for the standard use cases out of the box, yet be architected to be extensible for specialised data sharing needs that we knew some of our customers had. It would need to be usable directly on Coviu.com, but also able to integrate with specialised applications that some of our customers were already using, such as the applications that they spend most of their time in (CRMs, practice management systems, learning management systems, team chat systems). It would need to require our customers to sign up, yet allow their clients to join a call without signing up.
Collaboration is a big problem. People are continuing to get more comfortable with technology and are less and less inclined to travel distances just to get a service done. In a country as large as Australia, where 12% of the population lives in rural and remote areas, people may not even be able to travel distances, particularly to receive or provide recurring or specialised services, or to achieve work/life balance. To make the world a global village, we need to be able to work together better remotely.
The need for collaboration is being recognised by specialised Web applications already, such as the LiveShare feature of Invision for Designers, Codassium for pair programming, or the recently announced Dropbox Paper. Few go all the way to video – WebRTC is still regarded as a complicated feature to support.
With Coviu, we’d like to offer a collaboration feature to every Web app. We now have a Web app that provides a modern and beautifully designed collaboration interface. To enable other Web apps to integrate it, we are now developing an API. Integration may entail customisation of the data sharing part of Coviu – something Coviu has been designed for. How to replicate the data and keep it consistent when people collaborate remotely – that is where Coviu makes a difference.
We have started our journey and have just launched free signup to the Coviu base product, which allows individuals to own their own “room” (i.e. a fixed URL) in which to collaborate with others. A huge shout out goes to everyone in the Coviu team – a pretty amazing group of people – who have turned the app from an idea to reality. You are all awesome!
With Coviu you can share and annotate:
- images (show your mum photos of your last holidays, or get feedback on an architecture diagram from a customer),
- pdf files (give a presentation remotely, or walk a customer through a contract),
- whiteboards (brainstorm with a colleague), and
- application windows (watch a YouTube video together, or work through your task list with your colleagues).
All of these are regarded as “shared documents” in Coviu and thus have zooming and annotation features and are listed in a document tray for ease of navigation.
This is just the beginning of how we want to make working together online more productive. Give it a go and let us know what you think.
The post My journey to Coviu first appeared on ginger’s thoughts.
-
OpenCV VideoWriter using ffmpeg with "Could not open codec 'libx264'" Error
19 August, by user2262504
I am new to OpenCV and I want to write Mat images into a video using VideoWriter on Ubuntu 12.04, but when constructing the VideoWriter, errors come up.



It seems that OpenCV invokes the ffmpeg API with default parameters, and ffmpeg in turn invokes x264 with its default parameters; these settings are broken for libx264, hence the "Could not open codec 'libx264'" error.



Does anyone have ideas on how to solve this problem?



More specifically:



- does anyone know where and how OpenCV invokes the ffmpeg API?
- how can the ffmpeg default settings be changed in code, ideally in a way that can be easily embedded into OpenCV? (see the sketch after this list)
- will changes to the ffmpeg defaults be carried through to libx264?
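
Touching on the second point, here is a minimal sketch, not from the original post, of what setting a preset through the ffmpeg C API looks like, assuming an FFmpeg build with libx264 (the frame size and preset name are illustrative); OpenCV would need an equivalent call inside its ffmpeg backend (modules/highgui/src/cap_ffmpeg_impl.hpp) for VideoWriter to pick it up:

// Minimal sketch: open libx264 directly through the FFmpeg C API and hand
// x264 a speed preset via its private options, which avoids the
// "broken ffmpeg default settings" rejection.
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>
}

int main()
{
    avcodec_register_all();

    AVCodec *codec = avcodec_find_encoder_by_name("libx264");
    if (!codec)
        return -1;

    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    ctx->width = 320;
    ctx->height = 240;
    ctx->time_base.num = 1;
    ctx->time_base.den = 25;
    ctx->pix_fmt = AV_PIX_FMT_YUV420P;

    // The key line: give libx264 an encoding preset instead of the raw defaults.
    av_opt_set(ctx->priv_data, "preset", "medium", 0);

    return avcodec_open2(ctx, codec, NULL) < 0 ? -1 : 0;
}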









Errors:



1. Using CV_FOURCC('H', '2', '6', '4')
[libx264 @ 0x255de40] broken ffmpeg default settings detected
[libx264 @ 0x255de40] use an encoding preset (e.g. -vpre medium)
[libx264 @ 0x255de40] preset usage: -vpre <speed> -vpre <profile>
[libx264 @ 0x255de40] speed presets are listed in x264 --help
[libx264 @ 0x255de40] profile is optional; x264 defaults to high
Could not open codec 'libx264': Unspecified error

2. Using FOURCC = -1 to invoke a user-selected codec
OpenCV Error: Unsupported format or combination of formats (Gstreamer Opencv 
backend doesn't support this codec acutally.) in CvVideoWriter_GStreamer::open, 
file /home/XXX/Downloads/opencv-2.4.8/modules/highgui/src/cap_gstreamer.cpp, 
line 505 terminate called after throwing an instance of 'cv::Exception'
what(): /home/XXX/Downloads/opencv-2.4.8/modules/highgui/src/cap_gstreamer.cpp:
505: error: (-210) Gstreamer Opencv backend doesn't support this codec acutally.
in function CvVideoWriter_GStreamer::open



Code:



#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char *argv[])
{
    VideoWriter outputVideo;
    bool fourcc_on = true; // switch between the two errors shown above
    if (fourcc_on)
        outputVideo.open("outVideo.avi", CV_FOURCC('H', '2', '6', '4'), 25, Size(100, 100), true);
    else
        outputVideo.open("outVideo.avi", -1, 25, Size(100, 100), true);

    if (!outputVideo.isOpened())
    {
        cout << "Could not open the output video for write" << endl;
        return -1;
    }
    return 0;
}
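
As an aside, and not part of the original question: if H.264 output is not a hard requirement, a common workaround is to request a codec whose defaults OpenCV's FFmpeg backend opens cleanly, for example motion JPEG:

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    // Workaround sketch: MJPEG's defaults open without the libx264 preset
    // problem, at the cost of a larger, intra-only file.
    VideoWriter writer("outVideo.avi", CV_FOURCC('M', 'J', 'P', 'G'), 25, Size(100, 100), true);
    return writer.isOpened() ? 0 : -1;
}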




OpenCV Configuration:



-- Detected version of GNU GCC: 46 (406)
-- Found OpenEXR: /usr/lib/libIlmImf.so
-- Looking for linux/videodev.h
-- Looking for linux/videodev.h - not found
-- Looking for linux/videodev2.h
-- Looking for linux/videodev2.h - found
-- Looking for sys/videoio.h
-- Looking for sys/videoio.h - not found
-- Looking for libavformat/avformat.h
-- Looking for libavformat/avformat.h - found
-- Looking for ffmpeg/avformat.h
-- Looking for ffmpeg/avformat.h - not found
-- Could NOT find JNI (missing: JAVA_INCLUDE_PATH JAVA_INCLUDE_PATH2 JAVA_AWT_INCLUDE_PATH) 
-- 
-- General configuration for OpenCV 2.4.8 =====================================
-- Version control: unknown
-- 
-- Platform:
-- Host: Linux 3.8.0-38-generic x86_64
-- CMake: 2.8.7
-- CMake generator: Unix Makefiles
-- CMake build tool: /usr/bin/make
-- Configuration: RELEASE
-- 
-- C/C++:
-- Built as dynamic libs?: YES
-- C++ Compiler: /usr/bin/c++ (ver 4.6)
-- C++ flags (Release): -fsigned-char -W -Wall -Werror=return-type -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -msse3 -ffunction-sections -O3 -DNDEBUG -DNDEBUG
-- C++ flags (Debug): -fsigned-char -W -Wall -Werror=return-type -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -msse3 -ffunction-sections -g -O0 -DDEBUG -D_DEBUG
-- C Compiler: /usr/bin/gcc
-- C flags (Release): -fsigned-char -W -Wall -Werror=return-type -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -msse3 -ffunction-sections -O3 -DNDEBUG -DNDEBUG
-- C flags (Debug): -fsigned-char -W -Wall -Werror=return-type -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -msse3 -ffunction-sections -g -O0 -DDEBUG -D_DEBUG
-- Linker flags (Release): 
-- Linker flags (Debug): 
-- Precompiled headers: YES
-- 
-- OpenCV modules:
-- To be built: core flann imgproc highgui features2d calib3d ml video legacy objdetect photo gpu ocl nonfree contrib python stitching superres ts videostab
-- Disabled: world
-- Disabled by dependency: -
-- Unavailable: androidcamera dynamicuda java
-- 
-- GUI: 
-- QT: NO
-- GTK+ 2.x: YES (ver 2.24.10)
-- GThread : YES (ver 2.32.4)
-- GtkGlExt: NO
-- OpenGL support: NO
-- 
-- Media I/O: 
-- ZLib: /usr/lib/x86_64-linux-gnu/libz.so (ver 1.2.3.4)
-- JPEG: /usr/lib/x86_64-linux-gnu/libjpeg.so (ver )
-- PNG: /usr/lib/x86_64-linux-gnu/libpng.so (ver 1.2.46)
-- TIFF: /usr/lib/x86_64-linux-gnu/libtiff.so (ver 42 - 3.9.5)
-- JPEG 2000: /usr/lib/x86_64-linux-gnu/libjasper.so (ver 1.900.1)
-- OpenEXR: /usr/lib/libImath.so /usr/lib/libIlmImf.so /usr/lib/libIex.so /usr/lib/libHalf.so /usr/lib/libIlmThread.so (ver 1.6.1)
-- 
-- Video I/O:
-- DC1394 1.x: NO
-- DC1394 2.x: YES (ver 2.2.0)
-- FFMPEG: YES
-- codec: YES (ver 55.58.105)
-- format: YES (ver 55.37.101)
-- util: YES (ver 52.78.100)
-- swscale: YES (ver 2.6.100)
-- gentoo-style: YES
-- GStreamer: 
-- base: YES (ver 0.10.36)
-- app: YES (ver 0.10.36)
-- video: YES (ver 0.10.36)
-- OpenNI: NO
-- OpenNI PrimeSensor Modules: NO
-- PvAPI: NO
-- GigEVisionSDK: NO
-- UniCap: NO
-- UniCap ucil: NO
-- V4L/V4L2: Using libv4l (ver 1.0.1)
-- XIMEA: NO
-- Xine: NO
-- 
-- Other third-party libraries:
-- Use IPP: NO
-- Use Eigen: NO
-- Use TBB: NO
-- Use OpenMP: NO
-- Use GCD NO
-- Use Concurrency NO
-- Use C=: NO
-- Use Cuda: NO
-- Use OpenCL: YES
-- 
-- OpenCL:
-- Version: dynamic
-- Include path: /home/shixudongleo/Downloads/opencv-2.4.8/3rdparty/include/opencl/1.2
-- Use AMD FFT: NO
-- Use AMD BLAS: NO
-- 
-- Python:
-- Interpreter: /usr/bin/python (ver 2.7.3)
-- Libraries: /usr/lib/libpython2.7.so
-- numpy: /usr/lib/python2.7/dist-packages/numpy/core/include (ver 1.6.1)
-- packages path: lib/python2.7/dist-packages
-- 
-- Java:
-- ant: NO
-- JNI: NO
-- Java tests: NO
-- 
-- Documentation:
-- Build Documentation: NO
-- Sphinx: NO
-- PdfLaTeX compiler: /usr/bin/pdflatex
-- 
-- Tests and samples:
-- Tests: YES
-- Performance tests: YES
-- C/C++ Examples: NO
-- 
-- Install path: /usr/local
-- 
-- cvconfig.h is in: /home/shixudongleo/Downloads/opencv-2.4.8/build
-- -----------------------------------------------------------------
-- 
-- Configuring done
-- Generating done
-- Build files have been written to: /home/XXX/Downloads/opencv-2.4.8/build




FFMPEG



ffmpeg support is enabled in OpenCV, and libx264 was enabled when compiling ffmpeg.
From the ffmpeg command line, libx264 runs normally.



$ ffmpeg -i test.avi -vcodec libx264 test.mp4
ffmpeg -i test.avi -vcodec libx264 test.mp4 > ~/Downloads/ffmpeg_log.txt
ffmpeg version 2.2.git Copyright (c) 2000-2014 the FFmpeg developers
 built on Apr 24 2014 16:39:51 with gcc 4.6 (Ubuntu/Linaro 4.6.3-1ubuntu5)
 configuration: --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-nonfree --enable-postproc --enable-version3 --enable-x11grab --enable-shared --enable-pic
 libavutil 52. 78.100 / 52. 78.100
 libavcodec 55. 58.105 / 55. 58.105
 libavformat 55. 37.101 / 55. 37.101
 libavdevice 55. 13.100 / 55. 13.100
 libavfilter 4. 4.100 / 4. 4.100
 libswscale 2. 6.100 / 2. 6.100
 libswresample 0. 18.100 / 0. 18.100
 libpostproc 52. 3.100 / 52. 3.100
Input #0, avi, from 'test.avi':
 Duration: 00:00:03.73, start: 0.000000, bitrate: 1757 kb/s
 Stream #0:0: Video: msvideo1 (CRAM / 0x4D415243), rgb555le, 320x240, 1781 kb/s, 15 tbr, 15 tbn, 15 tbc
 Metadata:
 title : julius.avi Video #1
File 'test.mp4' already exists. Overwrite ? [y/N] y
No pixel format specified, yuv444p for H.264 encoding chosen.
Use -pix_fmt yuv420p for compatibility with outdated media players.
[libx264 @ 0x25d08e0] using cpu capabilities: none!
[libx264 @ 0x25d08e0] profile High 4:4:4 Predictive, level 1.2, 4:4:4 8-bit
[libx264 @ 0x25d08e0] 264 - core 142 - H.264/MPEG-4 AVC codec - Copyleft 2003-2014 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=12 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=15 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'test.mp4':
 Metadata:
 encoder : Lavf55.37.101
 Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv444p, 320x240, q=-1--1, 15360 tbn, 15 tbc
 Metadata:
 title : julius.avi Video #1
Stream mapping:
 Stream #0:0 -> #0:0 (msvideo1 -> libx264)
Press [q] to stop, [?] for help
frame= 56 fps=0.0 q=-1.0 Lsize= 321kB time=00:00:03.60 bitrate= 731.0kbits/s 
video:320kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.409949%
[libx264 @ 0x25d08e0] frame I:3 Avg QP:15.36 size: 7975
[libx264 @ 0x25d08e0] frame P:38 Avg QP:26.05 size: 6230
[libx264 @ 0x25d08e0] frame B:15 Avg QP:28.25 size: 4418
[libx264 @ 0x25d08e0] consecutive B-frames: 46.4% 53.6% 0.0% 0.0%
[libx264 @ 0x25d08e0] mb I I16..4: 1.4% 72.8% 25.8%
[libx264 @ 0x25d08e0] mb P I16..4: 1.6% 5.7% 15.1% P16..4: 7.6% 6.3% 7.4% 0.0% 0.0% skip:56.3%
[libx264 @ 0x25d08e0] mb B I16..4: 0.2% 1.0% 2.0% B16..8: 13.3% 7.8% 8.7% direct: 8.3% skip:58.8% L0:34.9% L1:36.6% BI:28.5%
[libx264 @ 0x25d08e0] 8x8 transform intra:37.7% inter:2.3%
[libx264 @ 0x25d08e0] coded y,u,v intra: 52.1% 42.1% 30.1% inter: 19.6% 9.2% 5.2%
[libx264 @ 0x25d08e0] i16 v,h,dc,p: 56% 17% 24% 2%
[libx264 @ 0x25d08e0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 10% 16% 68% 1% 1% 1% 1% 1% 1%
[libx264 @ 0x25d08e0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 19% 18% 28% 5% 6% 5% 7% 5% 6%
[libx264 @ 0x25d08e0] Weighted P-Frames: Y:31.6% UV:21.1%
[libx264 @ 0x25d08e0] ref P L0: 70.5% 9.0% 12.1% 6.5% 2.0%
[libx264 @ 0x25d08e0] ref B L0: 91.3% 8.7%
[libx264 @ 0x25d08e0] kb/s:700.56



-
Cutting a live stream into separate mp4 files
9 June 2017, by Fearhunter
I am doing research on cutting a live stream into pieces and saving them as mp4 files. I am using this source for the proof of concept:
And this is the example code I use:
using System;
using System.Collections.Generic;
using System.Configuration;
using System.IO;
using System.Linq;
using System.Net;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MediaServices.Client;
using Newtonsoft.Json.Linq;

namespace AMSLiveTest
{
    class Program
    {
        private const string StreamingEndpointName = "streamingendpoint001";
        private const string ChannelName = "channel001";
        private const string AssetlName = "asset001";
        private const string ProgramlName = "program001";

        // Read values from the App.config file.
        private static readonly string _mediaServicesAccountName =
            ConfigurationManager.AppSettings["MediaServicesAccountName"];
        private static readonly string _mediaServicesAccountKey =
            ConfigurationManager.AppSettings["MediaServicesAccountKey"];

        // Field for service context.
        private static CloudMediaContext _context = null;
        private static MediaServicesCredentials _cachedCredentials = null;

        static void Main(string[] args)
        {
            // Create and cache the Media Services credentials in a static class variable.
            _cachedCredentials = new MediaServicesCredentials(
                _mediaServicesAccountName,
                _mediaServicesAccountKey);

            // Use the cached credentials to create CloudMediaContext.
            _context = new CloudMediaContext(_cachedCredentials);

            IChannel channel = CreateAndStartChannel();

            // Set the Live Encoder to point to the channel's input endpoint:
            string ingestUrl = channel.Input.Endpoints.FirstOrDefault().Url.ToString();

            // Use the previewEndpoint to preview and verify
            // that the input from the encoder is actually reaching the Channel.
            string previewEndpoint = channel.Preview.Endpoints.FirstOrDefault().Url.ToString();

            IProgram program = CreateAndStartProgram(channel);
            ILocator locator = CreateLocatorForAsset(program.Asset, program.ArchiveWindowLength);
            IStreamingEndpoint streamingEndpoint = CreateAndStartStreamingEndpoint();
            GetLocatorsInAllStreamingEndpoints(program.Asset);

            // Once you are done streaming, clean up your resources.
            Cleanup(streamingEndpoint, channel);
        }

        public static IChannel CreateAndStartChannel()
        {
            // If you want to change the Smooth fragments to HLS segment ratio, you would set the ChannelCreationOptions's Output property.
            IChannel channel = _context.Channels.Create(
                new ChannelCreationOptions
                {
                    Name = ChannelName,
                    Input = CreateChannelInput(),
                    Preview = CreateChannelPreview()
                });

            // Starting and stopping Channels can take some time to execute. To determine the state of operations after calling Start or Stop, query IChannel.State.
            channel.Start();

            return channel;
        }

        private static ChannelInput CreateChannelInput()
        {
            return new ChannelInput
            {
                StreamingProtocol = StreamingProtocol.RTMP,
                AccessControl = new ChannelAccessControl
                {
                    IPAllowList = new List<IPRange>
                    {
                        new IPRange
                        {
                            Name = "TestChannelInput001",
                            // Setting 0.0.0.0 for Address and 0 for SubnetPrefixLength
                            // will allow access to all IP addresses.
                            Address = IPAddress.Parse("0.0.0.0"),
                            SubnetPrefixLength = 0
                        }
                    }
                }
            };
        }

        private static ChannelPreview CreateChannelPreview()
        {
            return new ChannelPreview
            {
                AccessControl = new ChannelAccessControl
                {
                    IPAllowList = new List<IPRange>
                    {
                        new IPRange
                        {
                            Name = "TestChannelPreview001",
                            // Setting 0.0.0.0 for Address and 0 for SubnetPrefixLength
                            // will allow access to all IP addresses.
                            Address = IPAddress.Parse("0.0.0.0"),
                            SubnetPrefixLength = 0
                        }
                    }
                }
            };
        }

        public static void UpdateCrossSiteAccessPoliciesForChannel(IChannel channel)
        {
            var clientPolicy =
                @"<?xml version=""1.0"" encoding=""utf-8""?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers=""*"">
        <domain uri=""*""/>
      </allow-from>
      <grant-to>
        <resource path=""/"" include-subpaths=""true""/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>";

            var xdomainPolicy =
                @"<?xml version=""1.0"" ?>
<cross-domain-policy>
  <allow-access-from domain=""*"" />
</cross-domain-policy>";

            channel.CrossSiteAccessPolicies.ClientAccessPolicy = clientPolicy;
            channel.CrossSiteAccessPolicies.CrossDomainPolicy = xdomainPolicy;
            channel.Update();
        }

        public static IProgram CreateAndStartProgram(IChannel channel)
        {
            IAsset asset = _context.Assets.Create(AssetlName, AssetCreationOptions.None);

            // Create a Program on the Channel. You can have multiple Programs that overlap or are sequential;
            // however each Program must have a unique name within your Media Services account.
            IProgram program = channel.Programs.Create(ProgramlName, TimeSpan.FromHours(3), asset.Id);
            program.Start();

            return program;
        }

        public static ILocator CreateLocatorForAsset(IAsset asset, TimeSpan ArchiveWindowLength)
        {
            // You cannot create a streaming locator using an AccessPolicy that includes write or delete permissions.
            var locator = _context.Locators.CreateLocator
                (
                    LocatorType.OnDemandOrigin,
                    asset,
                    _context.AccessPolicies.Create
                    (
                        "Live Stream Policy",
                        ArchiveWindowLength,
                        AccessPermissions.Read
                    )
                );

            return locator;
        }

        public static IStreamingEndpoint CreateAndStartStreamingEndpoint()
        {
            var options = new StreamingEndpointCreationOptions
            {
                Name = StreamingEndpointName,
                ScaleUnits = 1,
                AccessControl = GetAccessControl(),
                CacheControl = GetCacheControl()
            };

            IStreamingEndpoint streamingEndpoint = _context.StreamingEndpoints.Create(options);
            streamingEndpoint.Start();

            return streamingEndpoint;
        }

        private static StreamingEndpointAccessControl GetAccessControl()
        {
            return new StreamingEndpointAccessControl
            {
                IPAllowList = new List<IPRange>
                {
                    new IPRange
                    {
                        Name = "Allow all",
                        Address = IPAddress.Parse("0.0.0.0"),
                        SubnetPrefixLength = 0
                    }
                },
                AkamaiSignatureHeaderAuthenticationKeyList = new List<AkamaiSignatureHeaderAuthenticationKey>
                {
                    new AkamaiSignatureHeaderAuthenticationKey
                    {
                        Identifier = "My key",
                        Expiration = DateTime.UtcNow + TimeSpan.FromDays(365),
                        Base64Key = Convert.ToBase64String(GenerateRandomBytes(16))
                    }
                }
            };
        }

        private static byte[] GenerateRandomBytes(int length)
        {
            var bytes = new byte[length];
            using (var rng = new RNGCryptoServiceProvider())
            {
                rng.GetBytes(bytes);
            }

            return bytes;
        }

        private static StreamingEndpointCacheControl GetCacheControl()
        {
            return new StreamingEndpointCacheControl
            {
                MaxAge = TimeSpan.FromSeconds(1000)
            };
        }

        public static void UpdateCrossSiteAccessPoliciesForStreamingEndpoint(IStreamingEndpoint streamingEndpoint)
        {
            var clientPolicy =
                @"<?xml version=""1.0"" encoding=""utf-8""?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers=""*"">
        <domain uri=""*""/>
      </allow-from>
      <grant-to>
        <resource path=""/"" include-subpaths=""true""/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>";

            var xdomainPolicy =
                @"<?xml version=""1.0"" ?>
<cross-domain-policy>
  <allow-access-from domain=""*"" />
</cross-domain-policy>";

            streamingEndpoint.CrossSiteAccessPolicies.ClientAccessPolicy = clientPolicy;
            streamingEndpoint.CrossSiteAccessPolicies.CrossDomainPolicy = xdomainPolicy;
            streamingEndpoint.Update();
        }

        public static void GetLocatorsInAllStreamingEndpoints(IAsset asset)
        {
            var locators = asset.Locators.Where(l => l.Type == LocatorType.OnDemandOrigin);
            var ismFile = asset.AssetFiles.AsEnumerable().FirstOrDefault(a => a.Name.EndsWith(".ism"));
            var template = new UriTemplate("{contentAccessComponent}/{ismFileName}/manifest");
            var urls = locators.SelectMany(l =>
                    _context
                        .StreamingEndpoints
                        .AsEnumerable()
                        .Where(se => se.State == StreamingEndpointState.Running)
                        .Select(
                            se =>
                                template.BindByPosition(new Uri("http://" + se.HostName),
                                    l.ContentAccessComponent,
                                    ismFile.Name)))
                .ToArray();
        }

        public static void Cleanup(IStreamingEndpoint streamingEndpoint,
            IChannel channel)
        {
            if (streamingEndpoint != null)
            {
                streamingEndpoint.Stop();
                streamingEndpoint.Delete();
            }

            IAsset asset;

            if (channel != null)
            {
                foreach (var program in channel.Programs)
                {
                    asset = _context.Assets.Where(se => se.Id == program.AssetId)
                        .FirstOrDefault();

                    program.Stop();
                    program.Delete();

                    if (asset != null)
                    {
                        foreach (var l in asset.Locators)
                            l.Delete();

                        asset.Delete();
                    }
                }

                channel.Stop();
                channel.Delete();
            }
        }
    }
}
Now I want to make a method that cuts the live stream into pieces, for example every 15 minutes, and saves each piece as an mp4 file, but I don't know where to start.
Can someone point me in the right direction?
Kind regards
UPDATE:
I want to save the mp4 files on my hard disk.
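
Not an Azure Media Services answer as such, but a sketch of one possible direction, assuming the stream can be pulled with ffmpeg (the RTMP URL below is a placeholder): ffmpeg's segment muxer cuts a live input into fixed-length mp4 files on disk, here every 15 minutes (900 seconds):

$ ffmpeg -i rtmp://example.com/live/stream -c copy -f segment -segment_time 900 -reset_timestamps 1 out%03d.mp4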