Advanced search

Media (1)

Keyword: - Tags -/iphone

Other articles (102)

  • APPENDIX: Plugins used specifically for the farm

    5 March 2010

    To work properly, the central/master site of the farm needs several plugins on top of those used by the channel sites: the Gestion de la mutualisation plugin; the inscription3 plugin, to handle registrations and requests to create a shared instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

  • Adding user-specific information and other changes to author-related behaviour

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviours (see its documentation for more information).
    It is also possible to add fields to authors by installing the champs extras 2 and Interface pour champs extras plugins.

  • Possibility of deployment as a farm

    12 April 2011

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
    This makes it possible, for example: to share the setup costs between several projects/individuals; to deploy a multitude of unique sites quickly; to avoid having to dump all the creations into a digital catch-all, as happens on the big general-public platforms scattered across the (...)

On other sites (9349)

  • How to simultaneously capture mic, stream it to RTSP server and play it on iPhone's speaker?

    24 August 2021, by Norbert Towiański

    I want to capture sound from the mic, stream it to an RTSP server and play it back simultaneously on the iPhone's speaker after getting the samples from the RTSP server; I mean that kind of loop. I use FFmpegKit and I want to use MobileVLCKit, but unfortunately the microphone goes silent when I start playing the stream.
    I think I've done the first step (capturing from the microphone and sending an OutputStream to the RTSP server):

    


    @IBAction func transmitBtnPressed(_ sender: Any) {
        ffmpeg_transmit()
    }

    @IBAction func recordBtnPressed(_ sender: Any) {
        switch recordingState {
        case .idle:
            recordingState = .start
            startRecording()
            recordBtn.setTitle("Started", for: .normal)
            let urlToFile = URL(fileURLWithPath: outPipePath!)
            outputStream = OutputStream(url: urlToFile, append: false)
            outputStream!.open()
        case .capturing:
            recordingState = .end
            stopRecording()
            recordBtn.setTitle("End", for: .normal)
        default:
            break
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        outPipePath = FFmpegKitConfig.registerNewFFmpegPipe()
        self.setup()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        setUpAuthStatus()
    }

    func setUpAuthStatus() {
        if AVCaptureDevice.authorizationStatus(for: AVMediaType.audio) != .authorized {
            AVCaptureDevice.requestAccess(for: AVMediaType.audio, completionHandler: { (authorized) in
                DispatchQueue.main.async {
                    if authorized {
                        self.setup()
                    }
                }
            })
        }
    }

    func setup() {
        self.session.sessionPreset = AVCaptureSession.Preset.high

        self.recordingURL = URL(fileURLWithPath: "\(NSTemporaryDirectory() as String)/file.m4a")
        if self.fileManager.isDeletableFile(atPath: self.recordingURL!.path) {
            _ = try? self.fileManager.removeItem(atPath: self.recordingURL!.path)
        }

        self.assetWriter = try? AVAssetWriter(outputURL: self.recordingURL!,
                                              fileType: AVFileType.m4a)
        self.assetWriter!.movieFragmentInterval = CMTime.invalid
        self.assetWriter!.shouldOptimizeForNetworkUse = true

        let audioSettings = [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVSampleRateKey: 48000.0,
            AVNumberOfChannelsKey: 1,
            AVLinearPCMIsFloatKey: false,
            AVLinearPCMBitDepthKey: 16,
            AVLinearPCMIsBigEndianKey: false,
            AVLinearPCMIsNonInterleaved: false,
        ] as [String : Any]

        self.audioInput = AVAssetWriterInput(mediaType: AVMediaType.audio,
                                             outputSettings: audioSettings)

        self.audioInput?.expectsMediaDataInRealTime = true

        if self.assetWriter!.canAdd(self.audioInput!) {
            self.assetWriter?.add(self.audioInput!)
        }

        self.session.startRunning()

        DispatchQueue.main.async {
            self.session.beginConfiguration()
            self.session.commitConfiguration()

            let audioDevice = AVCaptureDevice.default(for: AVMediaType.audio)
            let audioIn = try? AVCaptureDeviceInput(device: audioDevice!)

            if self.session.canAddInput(audioIn!) {
                self.session.addInput(audioIn!)
            }

            if self.session.canAddOutput(self.audioOutput) {
                self.session.addOutput(self.audioOutput)
            }

            self.audioConnection = self.audioOutput.connection(with: AVMediaType.audio)
        }
    }

    func startRecording() {
        if self.assetWriter?.startWriting() != true {
            print("error: \(self.assetWriter?.error.debugDescription ?? "")")
        }

        self.audioOutput.setSampleBufferDelegate(self, queue: self.recordingQueue)
    }

    func stopRecording() {
        self.audioOutput.setSampleBufferDelegate(nil, queue: nil)

        self.assetWriter?.finishWriting {
            print("Saved in folder \(self.recordingURL!)")
        }
    }

    func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        if !self.isRecordingSessionStarted {
            let presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
            self.assetWriter?.startSession(atSourceTime: presentationTime)
            self.isRecordingSessionStarted = true
            recordingState = .capturing
        }

        var blockBuffer: CMBlockBuffer?
        var audioBufferList = AudioBufferList()

        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, bufferListSizeNeededOut: nil, bufferListOut: &audioBufferList, bufferListSize: MemoryLayout<AudioBufferList>.size, blockBufferAllocator: nil, blockBufferMemoryAllocator: nil, flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, blockBufferOut: &blockBuffer)
        let buffers = UnsafeMutableAudioBufferListPointer(&audioBufferList)

        for buffer in buffers {
            let u8ptr = buffer.mData!.assumingMemoryBound(to: UInt8.self)
            let output = outputStream!.write(u8ptr, maxLength: Int(buffer.mDataByteSize))

            if output == -1 {
                let error = outputStream?.streamError
                print("\(#file) > \(#function) > Error on outputStream: \(error!.localizedDescription)")
            } else {
                print("\(#file) > \(#function) > Data sent")
            }
        }
    }

    func ffmpeg_transmit() {
        let cmd1: String = "-f s16le -ar 48000 -ac 1 -i "
        let cmd2: String = " -probesize 32 -analyzeduration 0 -c:a libopus -application lowdelay -ac 1 -ar 48000 -f rtsp -rtsp_transport udp rtsp://localhost:18556/mystream"
        let cmd = cmd1 + outPipePath! + cmd2

        print(cmd)

        ffmpegSession = FFmpegKit.executeAsync(cmd, withExecuteCallback: { ffmpegSession in
            let state = ffmpegSession?.getState()
            let returnCode = ffmpegSession?.getReturnCode()
            if let returnCode = returnCode, let get = ffmpegSession?.getFailStackTrace() {
                print("FFmpeg process exited with state \(String(describing: FFmpegKitConfig.sessionState(toString: state!))) and rc \(returnCode).\(get)")
            }
        }, withLogCallback: { log in

        }, withStatisticsCallback: { statistics in

        })
    }


    I want to use MobileVLCKit in this way:


    func startStream() {
        guard let url = URL(string: "rtsp://localhost:18556/mystream") else { return }
        audioPlayer!.media = VLCMedia(url: url)

        audioPlayer!.media.addOption("-vv")
        audioPlayer!.media.addOption("--network-caching=10000")

        audioPlayer!.delegate = self
        audioPlayer!.audio.volume = 100

        audioPlayer!.play()
    }
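    A side note from editing (not in the original question): MobileVLCKit media options are normally passed with a leading colon rather than the -/-- command-line form, so the two addOption calls above may be silently ignored. For example:

    audioPlayer!.media.addOption(":network-caching=10000") // VLCMedia option syntax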


    Could you give me some hints on how to implement this?
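    An editorial hint (not part of the original question): the microphone often goes silent when playback starts because the app's AVAudioSession falls back to a playback-only category. Configuring the shared session for simultaneous recording and playback before starting capture and the VLC player is a plausible first step. A minimal sketch, assuming a view controller like the one above:

    import AVFoundation

    func configureAudioSession() {
        let session = AVAudioSession.sharedInstance()
        do {
            // .playAndRecord keeps the mic live while audio is playing;
            // .defaultToSpeaker routes output to the loudspeaker instead
            // of the receiver (earpiece).
            try session.setCategory(.playAndRecord,
                                    mode: .default,
                                    options: [.defaultToSpeaker])
            try session.setActive(true)
        } catch {
            print("AVAudioSession configuration failed: \(error)")
        }
    }

    Calling this once at the top of viewDidLoad(), before setup() and before the player starts, should let capture and playback coexist; whether it also resolves the FFmpegKit/MobileVLCKit interaction described above is untested here.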


  • Converting images to MP4 using ffmpeg on iPhone

    29 November 2011, by user633901

    Up till now I can create MPEG-1 files, but I've had no luck with MP4. Maybe we can talk and share information. Someone told me that I have to set some flags to use MP4, but I am stuck on using them...

    The following is the working code:

    av_register_all();
    printf("Video encoding\n");

    // find the MPEG-4 video encoder
    //codec = avcodec_find_encoder(CODEC_ID_MPEG1VIDEO);
    codec = avcodec_find_encoder(CODEC_ID_MPEG4);

    if (!codec) {
       fprintf(stderr, "codec not found\n");
       exit(1);
    }

    c = avcodec_alloc_context();
    picture = avcodec_alloc_frame();

    // put sample parameters
    c->bit_rate = 400000;
    // resolution must be a multiple of two
    c->width = 240;
    c->height = 320;
    //c->codec_id = fmt->video_codec;
    // frames per second
    c->time_base = (AVRational){1,25};
    c->gop_size = 10; // emit one intra frame every ten frames
    c->max_b_frames = 1;
    c->pix_fmt = PIX_FMT_YUV420P;

    if (avcodec_open(c, codec) < 0) {
       fprintf(stderr, "could not open codec\n");
       exit(1);
    }

    f = fopen([[NSHomeDirectory() stringByAppendingPathComponent:@"test.mp4"] UTF8String], "wb");

    if (!f) {
       fprintf(stderr, "could not open %s\n",[@"test.mp4" UTF8String]);
       exit(1);
    }

    // alloc image and output buffer
    outbuf_size = 100000;
    outbuf = malloc(outbuf_size);
    size = c->width * c->height;

    #pragma mark -

    AVFrame* outpic = avcodec_alloc_frame();
    int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height); // smaller than the RGBA buffer allocated in the loop below

    //create buffer for the output image
    uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);

    #pragma mark -  
    for(k=0;k<1;k++) {
       for(i=0;i<25;i++) {
           fflush(stdout);

           int numBytes = avpicture_get_size(PIX_FMT_RGBA, c->width, c->height);
           uint8_t *buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

           UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"%d.png", i+1]];
           CGImageRef newCgImage = [image CGImage];

           CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
           CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
           // copy the pixels instead of reassigning the pointer: the original
           // assignment leaked the av_malloc'd buffer and later free()d memory
           // owned by bitmapData
           memcpy(buffer, CFDataGetBytePtr(bitmapData), numBytes);
           CFRelease(bitmapData);
           avpicture_fill((AVPicture*)picture, buffer, PIX_FMT_RGBA, c->width, c->height);
           avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);//does not have image data.

           struct SwsContext* fooContext = sws_getContext(c->width, c->height,
                                                         PIX_FMT_RGBA,
                                                         c->width, c->height,
                                                         PIX_FMT_YUV420P,
                                                         SWS_FAST_BILINEAR, NULL, NULL, NULL);

           // perform the RGBA -> YUV420P conversion
           sws_scale(fooContext, picture->data, picture->linesize, 0, c->height, outpic->data, outpic->linesize);
           sws_freeContext(fooContext); // created inside the loop, so free it each iteration

           // encode the image
           out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
           printf("encoding frame %3d (size=%5d)\n", i, out_size);
           fwrite(outbuf, 1, out_size, f);

           av_free(buffer); // allocated with av_malloc(), so use av_free()
           buffer = NULL;
       }

       // get the delayed frames
       for(; out_size; i++) {
           fflush(stdout);

           out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
           printf("write frame %3d (size=%5d)\n", i, out_size);
           fwrite(outbuf, 1, out_size, f); // write out_size bytes, not the whole buffer
       }
    }

    // add sequence end code to have a real mpeg file
    outbuf[0] = 0x00;
    outbuf[1] = 0x00;
    outbuf[2] = 0x01;
    outbuf[3] = 0xb7;
    fwrite(outbuf, 1, 4, f);
    fclose(f);
    av_free(outbuffer); // picture_buf was never declared; free the YUV buffer instead
    free(outbuf);

    avcodec_close(c);
    av_free(c);
    av_free(picture);
    av_free(outpic);
    printf("\n");

    My MSN: hieeli@hotmail.com
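    An editorial pointer (not part of the original question): avcodec_encode_video() emits a raw elementary stream, and fwrite()-ing it plus an MPEG-1 style sequence end code cannot yield a playable .mp4, because MP4 is a container that has to be written by a muxer. Below is a sketch of the missing muxing step, using the libavformat API of roughly that era and reusing the codec, c, outbuf and out_size variables from the code above (exact function names depend on the FFmpeg version):

    #include "libavformat/avformat.h"

    AVFormatContext *oc = NULL;
    avformat_alloc_output_context2(&oc, NULL, NULL, "test.mp4"); // picks the mp4 muxer
    AVStream *st = avformat_new_stream(oc, codec);
    // in a real integration, configure st->codec (bit_rate, width, height,
    // time_base, pix_fmt as above) and open the encoder on it instead of
    // using a standalone context
    avio_open(&oc->pb, "test.mp4", AVIO_FLAG_WRITE);
    avformat_write_header(oc, NULL);

    // per encoded frame, instead of fwrite(outbuf, 1, out_size, f):
    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.stream_index = st->index;
    pkt.data = outbuf;
    pkt.size = out_size;
    if (c->coded_frame && c->coded_frame->key_frame)
        pkt.flags |= AV_PKT_FLAG_KEY;
    av_interleaved_write_frame(oc, &pkt);

    // after the encode loop, instead of the sequence end code and fclose():
    av_write_trailer(oc);
    avio_close(oc->pb);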

  • How to play mp4 video file with HTML5 video tag on iOS (iPhone and iPad)?

    22 April 2015, by Mokielas

    I want to display HTML5 video to my users using the video tag. In Firefox, Chrome and Opera the WEBM version works as expected, and in Safari on Windows and Mac my MP4 version works too. The only problem I'm experiencing is that it won't play on iPad and iPhone (in Safari, of course).

    Create video

    The MP4 (h.264 + aac-lc) is converted like this (with profile baseline and level 3.0 for maximum compatibility with iOS):

    • Stream #0:0(eng) : Video : h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 640x352, 198 kb/s, 17 fps, 17 tbr, 17408 tbn, 34 tbc (default)
    • Stream #0:1(eng) : Audio : aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 56 kb/s (default)

    Edit: the full ffprobe output (bitrate etc. differ slightly from the figures above):

    Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf56.30.100
    Duration: 00:01:00.05, start: 0.046440, bitrate: 289 kb/s
    Stream #0:0(eng): Video: h264 (Constrained Baseline)
    (avc1 /0x31637661), yuv420p, 640x352, 198 kb/s, 25 fps, 25 tbr,
    12800 tbn, 50 tbc (default)
    Metadata:
     handler_name    : VideoHandler
    Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz,
    stereo, fltp, 85 kb/s (default)
    Metadata:
     handler_name    : SoundHandler

    I found various requirements for iOS devices like this and this; someone also mentioned adding the yuv420p pixel format when converting.

    The actual ffmpeg command looks like this:

    ffmpeg -i __inputfile__ -vcodec libx264 -pix_fmt yuv420p -profile:v baseline -level 3.0 -b:v 200K -r 17 -bt 800K -c:a libfdk_aac -b:a 85k -ar 44100 -y __outputfile_lowversion__.mp4

    Display video

    With Modernizr I detect which format is "supported" and add it to the src of the video tag. The last thing is adding the right MIME type; for MP4 I add type="video/mp4". The full code for the video tag is:

    <video class="p-video" preload="auto" autoplay="" type="video/mp4" src="http://full.url/to/video_low.mp4"></video>
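    A detail worth checking (my note, not from the original post): in HTML5 the type attribute is only defined for nested <source> elements, not for <video> itself, and iOS Safari tends to be stricter about the markup than desktop browsers. A spec-valid variant of the tag above, assuming the WEBM version mentioned earlier sits next to the MP4:

    <video class="p-video" preload="auto" autoplay>
      <source src="http://full.url/to/video_low.mp4" type="video/mp4">
      <source src="http://full.url/to/video_low.webm" type="video/webm">
    </video>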

    I tried various approaches: my own implementation with my own interface, the browser vendors' stock controls, and video.js, just to check whether I'm too stupid for this. They all work in the environments listed above, except on iPhone and iPad.

    I read this article on Video on the web, especially this part, and I only serve the "right" file with the "right" type, without a poster attribute set.

    My Apache has

    AddType video/mp4                      mp4 m4v f4v f4p
    AddType video/ogg                      ogv
    AddType video/webm                     webm

    And byte ranges are enabled; this is needed so the server can deliver partial content.

    Does anyone have a clue what's going on there? Thanks in advance!

    Edit: Safari and Chrome both throw a MEDIA_ERR_SRC_NOT_SUPPORTED error on iPad. There must be an issue with the encoding.