CulturesSupports/Readme.md

Our focus has now opened up to applications that run modern AI code tools on the iOS kernel.

We found an application for the iPhone that runs Ollama models.

That is the same as having Ollama models running on an iPhone.

Mollama is the only application we found that runs the full suite of Ollama models.

That means you can use your iPhone like a computer, and even fine-tune these models' chat prompts on the phone just as you would on an iMac or a MacBook Pro, simply by using the Mollama application on your iPhone.

You now have the power to prompt models with Mollama.

The kernel in the iPhone 12 and above will run these various models smoothly and fast, because the hardware in those phones handles text understanding, generative AI, and text prompting quickly, so code generation reaches a fast speed just by having an iPhone 12 or newer.


Running a local application gives the same kind of performance, with a clean graphical user interface for text prompting.

Use LM Studio

https://lmstudio.ai/
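LM Studio also exposes a local, OpenAI-compatible server (by default on http://localhost:1234), so the same models can be queried from code instead of the GUI. Below is a minimal sketch assuming the server is running with a model already loaded; the model identifier and prompt are placeholders.

```python
# Minimal sketch: query LM Studio's local OpenAI-compatible server.
# Assumes LM Studio is running its server on the default port 1234.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="local-model",  # placeholder: use the identifier shown inside LM Studio
    messages=[{"role": "user", "content": "Write a short Python function that reverses a string."}],
)
print(response.choices[0].message.content)
```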



Mollama: an application for running Llama models on the iPhone

https://www.mollama.com/

  • It runs models for text prompting and coding, and helps create well-made repositories with a professional outcome and layout, using generative AI to build sophisticated applications that meet future demands

  • It can also build Flutter applications, connect Flutter to database technologies, and serve fast-loading widgets. It is the fastest way possible to make a widget-based Flutter application.

  • A special feature turns ideas into real, working Flutter applications, with interface designs prompted by generative AI

  • So it is especially suited for quickly sketching out and finding out what application you are making, in a matter of minutes, with Flutter

  • You now have the chance to build a Flutter application in as little as five minutes. A full, specific Flutter application that earlier took a day, or maybe some weeks, can now be prompted down to about five minutes, and you do not need a thousand-person code team searching for answers

  • The AI generates this set of features in a matter of 5 to 10 minutes and structures the application into well-formed parts, with code presented step by step: how to install the necessary libraries to run the application, and which widgets you should connect to enable it, together with a good, professional description of the product realization in the code instructions




Using Hugging Face datasets

https://huggingface.co/docs/datasets/quickstart
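As a quick sketch of the quickstart flow, the `datasets` library (installed with `pip install datasets`) loads a dataset in a couple of lines; `rotten_tomatoes` is just the small public example dataset used in the docs.

```python
# Minimal sketch of loading a dataset with the Hugging Face `datasets` library.
from datasets import load_dataset

dataset = load_dataset("rotten_tomatoes", split="train")
print(dataset)      # number of rows and column names
print(dataset[0])   # first example: {"text": ..., "label": ...}
```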

Using the Hugging Face Chat UI on localhost

https://huggingface.co/docs/chat-ui/index

Training Datasets with AutoTrain

https://huggingface.co/docs/autotrain/quickstart_spaces


  • remember to insert an API key before starting a Hugging Face project (see the sketch below)
  • some datasets require an API key from the Hugging Face API
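A minimal sketch of that authentication step, assuming your token is stored in the `HF_TOKEN` environment variable; the dataset name at the end is only a placeholder for a gated dataset.

```python
# Sketch: authenticate with a Hugging Face token before loading gated/private data.
import os
from huggingface_hub import login
from datasets import load_dataset

login(token=os.environ["HF_TOKEN"])  # or paste the token string directly

# placeholder name for a dataset that requires authentication
dataset = load_dataset("some-gated-dataset")
```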



Apps Developed by the Hugging Face Community

Apps you can test inside the website without needing a local install

https://huggingface.co/spaces


Here is a Real Example App Where You Can Generate Images

  • examples to type into the app:
a girl on a beach wearing sunglasses

a random image of a historical event

a robot carrying weapons from a Star Wars movie

a Hollywood 3D movie figure, ready for a film set

https://huggingface.co/spaces/mukaist/Midjourney
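Spaces built with Gradio can also be called from Python through the `gradio_client` package. This is only a hedged sketch: the endpoint name and the single `prompt` argument below are assumptions, so run `client.view_api()` first to see the Space's real inputs.

```python
# Hedged sketch of calling a Hugging Face Space programmatically.
from gradio_client import Client

client = Client("mukaist/Midjourney")
client.view_api()  # prints the Space's endpoints and their expected arguments

# assumed call shape: a single text prompt to a "/run"-style endpoint
result = client.predict("a girl on a beach wearing sunglasses", api_name="/run")
print(result)
```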










🔭 I’m currently working on ...

Setup of Ollama AI

  • We set up Ollama apps, design datasets and models, and use AI to generate data models for apps
  • Design AI and train AI
  • Make generative AI text prompts that communicate and explain knowledge and results (see the sketch below)
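As a small sketch of what communicating with a model looks like in code, the official `ollama` Python package (`pip install ollama`) can chat with a locally pulled model; `llama3.2` here assumes that model has already been downloaded.

```python
# Sketch: chat with a locally running Ollama model from Python.
import ollama

response = ollama.chat(
    model="llama3.2",  # assumes this model has been pulled locally
    messages=[{"role": "user", "content": "Explain what a dataset is in one paragraph."}],
)
print(response["message"]["content"])
```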


We are developing generative AI apps for local code development

  • Using a locally hosted app interface to run Ollama models and generate better apps than ChatGPT, but without the hassle of hitting usage limits

Having these tools lets us develop apps faster and more specifically,
find the right code and libraries and see how the app should be coded,
and the AI also gives security considerations
along with the source code instructions

- AI gets us to where we need to be to work out our agenda for correct code development
- and gives us a nice interface for managing a professional AI code development help tool
- the code can also be managed to solve our backlog of 100 waiting projects
and become a professional organisation that understands programming faster
- with AI code tools we were able to understand Python and C++ in a few hours
- we went from doing nothing but research
to operating an AI organisation and executing coding projects

https://ollama.com/library

  • with Ollama run locally we don't need to wait for the Llama 2 prompt to become ready on Facebook Meta's site

At 14:14 in this video we learned how to generate and use the Ollama libraries, from this guy's tutorial explaining how to run an Ollama prompt

https://youtu.be/V6LDl3Vjq-A?si=N4aQSaozzFfY9t2s






Use AI chat to train AI (restricted to testing a few prompts)

https://www.llama2.ai/

  • use AI to generate Llama generative data
  • train AI to carry out an operative target agenda
  • follow these steps:

- have a GitHub account
- share the API key
- authenticate with GitHub



Alternative 2: run locally in the terminal (the way to unlimited, endless AI prompting)

https://ollama.com/download

  • register a user
  • download Ollama
  • run Ollama after the download
  • paste this command in the terminal

ollama run llama3.2

  • you will need to download Ollama and install it locally on your machine
  • after it is installed in the local application folder on the computer
  • run the ollama run command in the terminal window
3 steps:


- install Ollama in the local app folder
- run Ollama
- insert this command in the terminal after opening the installed Ollama software

ollama run llama3.2

  • insert only this command in the terminal:

ollama run llama3.2
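Once `ollama run llama3.2` works in the terminal, the same local server can also be prompted from code. This is a minimal sketch against Ollama's default REST endpoint on localhost:11434; the prompt text is just an example.

```python
# Sketch: prompt the local Ollama server over its REST API.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.2", "prompt": "Write a haiku about local AI.", "stream": False},
)
print(resp.json()["response"])
```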


  • it is the main Ollama model that you start training data with

  • experiment and commit to training datasets to develop special AI experiences

  • there are datasets for programming websites with AI

  • there are datasets for programming apps

  • there are datasets for designing images with text-to-image prompts

  • there are datasets for designing videos from text prompts

train AI to help automatically design websites

- train AI to design apps
- train AI to design videos
- train AI to design films
- train AI to design YouTube Shorts videos
- train AI to design background images
- train AI to do the automation task of helping with answers
- train AI to become a mental health partner and friend
- train AI to become a psychology expert
- train AI to make ebook storytelling


Working with more advanced model selections:

just select and test in the terminal as done in alternative 2 after the install

https://ollama.com/library

you find a model to load and train in the terminal client

the Ollama repositories have a link window with a text command to paste into the terminal
- the commands and model names differ depending on which model you want to train the AI on (see the sketch below)
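Instead of pasting each library command into the terminal, the same pull-and-test loop can be sketched in Python with the `ollama` package; `codellama` below is only an example model name from the library.

```python
# Sketch: pull another model from the Ollama library and try it.
import ollama

ollama.pull("codellama")  # same effect as `ollama pull codellama` in the terminal

reply = ollama.chat(
    model="codellama",
    messages=[{"role": "user", "content": "Write a C++ function that sums a vector of ints."}],
)
print(reply["message"]["content"])
```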







Results in an AI Training App of Machine Learning Using TensorFlow

  • the AI developed an app instance targeted specifically at the iPhone 13, allocating about 7% of the instance to image recognition and image classification technologies

    • this means we solved an interface run on the iPhone 13 kernel with specific optimisation, and a command prompt enabled the development in 10 minutes, starting the app divided into C++ and Python
  • it also used the TensorFlow library

  • it also produced the code needed to run the MobileNet TensorFlow model on the specific kernel, without it being mentioned in the prompt


>>> Make a Tensorflow app for image recognition and image classification
for mobile kernel processing
7% source allocation runs of iphone13 kernel OS

This means we now run AI on a specific kernel instance with MobileNets, training for the specific phone kernel OS
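As a reference point, here is a hedged, minimal sketch of what such TensorFlow code typically looks like: classifying an image with a pretrained MobileNetV2 and exporting it to TensorFlow Lite for on-device use. The image path and output filename are placeholders, and this is not the exact code the AI produced (that is in the linked file below).

```python
# Sketch: MobileNetV2 image classification plus TFLite export for mobile use.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

# "photo.jpg" is a placeholder image path
img = tf.keras.utils.load_img("photo.jpg", target_size=(224, 224))
x = tf.keras.applications.mobilenet_v2.preprocess_input(
    np.expand_dims(tf.keras.utils.img_to_array(img), axis=0)
)
preds = model.predict(x)
print(tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0])

# convert to a TensorFlow Lite model for on-device deployment
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("mobilenet_v2.tflite", "wb") as f:
    f.write(converter.convert())
```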


TensorflowiphoneKernel.txt

https://github.com/CulturesSupports/CulturesSupports/blob/main/TensorflowiphoneKernel.txt

This is the result of what the AI coded for us in under 10 minutes

Pinned repositories

  1. Local_developer (Public)

    Personal internal repository where Ole writes code paths

  2. Build-instruction (Public)

    AI-generated build instructions

  3. Ollama-Studio (Public)

    Forked from lmstudio-ai/lms

    👾 LM Studio CLI - Run Ollama Local Query

    TypeScript

  4. CulturesSupports (Public)

    Profile description of results in machine learning