0x2273
rajp152k committed Jun 23, 2024
1 parent aaad12c commit 27f45d2
Showing 6 changed files with 168 additions and 5 deletions.
6 changes: 5 additions & 1 deletion Content/20230717115524-text_representation.org
@@ -4,7 +4,8 @@
:END:
#+title: Text Representation
#+filetags: :nlp:
-Note: we aproach this step post [[id:e9d75f9d-f8bf-4125-beb0-8ca34166ce9e][data engineering]].

+Note: we approach this step post [[id:e9d75f9d-f8bf-4125-beb0-8ca34166ce9e][data engineering]].

AKA Textual feature representation

@@ -119,3 +120,6 @@ Some basic vectorization approaches:
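For concreteness, a minimal sketch of one such basic approach (bag-of-words counting); the toy corpus and the =vectorize= helper are hypothetical illustrations, not part of the note:

#+begin_src python
# Minimal bag-of-words sketch: each document becomes a vector of token
# counts over a shared vocabulary. Corpus and helper names are hypothetical.
from collections import Counter

corpus = [
    "text representation follows data engineering",
    "representation learning learns representation from data",
]

# One dimension per unique token across the corpus.
vocabulary = sorted({token for doc in corpus for token in doc.split()})

def vectorize(doc: str) -> list[int]:
    """Count occurrences of each vocabulary token in the document."""
    counts = Counter(doc.split())
    return [counts[token] for token in vocabulary]

for doc in corpus:
    print(vectorize(doc))
#+end_src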

* Relevant nodes
** [[id:20230713T110240.846573][Representation Learning]]
* Detour
- from a more abstract perspective, text is pretty personal to me as most of my journaling, logging, blogging and ideation takes place textually.
- This deserves a [[id:9e07b6d4-aa6a-4584-bb4e-6f1285be34c3][special treatment]] in the context of operating your environments.
5 changes: 4 additions & 1 deletion Content/20231227162344-computer_networks.org
@@ -19,7 +19,10 @@ The OSI Layers:
The communication between any two corresponding layers across two computers is said to be compliant with the respective [[id:11d303f1-d337-4f51-b211-db435a9f2cd0][Protocol]].
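To make the layering concrete, here is a minimal, self-contained sketch (hypothetical loopback host and port, Python standard library only): the application layer exchanges a payload while the OS supplies the transport (TCP) and network (IP) layers beneath it.

#+begin_src python
import socket
import threading

HOST, PORT = "127.0.0.1", 9090   # hypothetical local endpoint

ready = threading.Event()

def server() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                        # transport endpoint is up
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)         # bytes handed up by the transport layer
            conn.sendall(b"ack: " + data)  # application-level reply

threading.Thread(target=server, daemon=True).start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello")                  # the application only sees a payload;
    print(cli.recv(1024))                  # TCP segments and IP packets stay hidden
#+end_src

Note how neither endpoint ever constructs a TCP segment or an IP packet; each layer only talks to the one directly below it.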

Physical networking has its limitations in terms of the simple abstractions you can employ. See [[id:714b029b-d0ac-4842-89f5-5f871d1a22c7][Software Defined Networking]] for ways to alleviate this.


* Security
- the complete domain of [[id:6e9b50dc-c5c0-454d-ad99-e6b6968b221a][Cyber Security]] arises out of this node and will be given a treatise of its own.
- one of the most pragmatic and creative combinations of several technical domains out there.
* Practical
** [[id:24f4040a-7c18-416a-8460-e69280d437bf][Internet]]
** [[id:bc1cc0cf-5e6a-4fee-b9a5-16533730020a][Cloud Computing]]
13 changes: 12 additions & 1 deletion Content/20240206161614-cybersecurity.org
Expand Up @@ -2,7 +2,7 @@
:ID: 6e9b50dc-c5c0-454d-ad99-e6b6968b221a
:ROAM_ALIASES: Hacking
:END:
-#+title: CyberSecurity
+#+title: Cyber Security
#+filetags: :programming:root:

* Abstract
@@ -12,6 +12,10 @@ I've Aliased this node as Hacking as well but I do acknowledge the more respectf

I wish to keep these nodes relatively colloquial rather than correct to a fault when it comes to the common interpretation of these notes.

I'll refrain from using "hacking" in these nodes from now on, unless I really mean hacking away at something with an axe.

As for thinking in an offensive manner: as many have already documented, understanding a system's weaknesses is the most direct way to strengthen its defenses.

* Resources
** Hacking: the Art of Exploitation
:PROPERTIES:
@@ -20,3 +24,10 @@ Wish to keep nodes relatively colloquial rather than being correct to a fault wh
- following the book to learn about what computers really are and to be able to think about secure software
- not explicitly relevant to my domain but I'm generically into computers and wish to know more
- will not be posting notes verbatim from the textbook, but will be populating nodes without too many explicit linkages.

** Linux Basics for Hackers
:PROPERTIES:
:ID: 310eb440-587c-4927-9b06-e2f3e0efb647
:END:
- by OccupyTheWeb
- actively building cybersec skill set
9 changes: 9 additions & 0 deletions Content/20240619180844-text.org
@@ -0,0 +1,9 @@
:PROPERTIES:
:ID: 9e07b6d4-aa6a-4584-bb4e-6f1285be34c3
:END:
#+title: Text
#+filetags: :meta:

Text in computer science is a sequence of characters representing human-readable information, in contrast with binary data processed directly by computers. Ultimately, all data stored on a computer is represented as bits, but human-readable text is a happy medium for interacting with machines efficiently.

Efficient text manipulation is then a good tool to have in your utility belt.
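To make that concrete, a small sketch of the text → bytes → bits chain (the sample string is a hypothetical example):

#+begin_src python
# Human-readable text is encoded to raw bytes, which the machine
# ultimately stores as bits; decoding reverses the chain.
text = "hi"

encoded = text.encode("utf-8")                 # text -> bytes
bits = " ".join(f"{byte:08b}" for byte in encoded)

print(encoded)                  # b'hi'
print(bits)                     # 01101000 01101001 -- what is actually stored
print(encoded.decode("utf-8"))  # bytes -> text, back to human-readable form
#+end_src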
138 changes: 138 additions & 0 deletions Content/bib/references.bib
@@ -14,3 +14,141 @@ @misc{sheth_neurosymbolic_2023
annote = {Comment: To appear in IEEE Intelligent Systems},
file = {arXiv Fulltext PDF:/home/rp152k/Zotero/storage/XKHCL4IW/Sheth et al. - 2023 - Neurosymbolic AI -- Why, What, and How.pdf:application/pdf;arXiv.org Snapshot:/home/rp152k/Zotero/storage/JWNHUYEJ/2305.html:text/html},
}

@misc{garcez_neurosymbolic_2020,
title = {Neurosymbolic {AI}: {The} 3rd {Wave}},
shorttitle = {Neurosymbolic {AI}},
url = {http://arxiv.org/abs/2012.05876},
doi = {10.48550/arXiv.2012.05876},
abstract = {Current advances in Artificial Intelligence (AI) and Machine Learning (ML) have achieved unprecedented impact across research communities and industry. Nevertheless, concerns about trust, safety, interpretability and accountability of AI were raised by influential thinkers. Many have identified the need for well-founded knowledge representation and reasoning to be integrated with deep learning and for sound explainability. Neural-symbolic computing has been an active area of research for many years seeking to bring together robust learning in neural networks with reasoning and explainability via symbolic representations for network models. In this paper, we relate recent and early research results in neurosymbolic AI with the objective of identifying the key ingredients of the next wave of AI systems. We focus on research that integrates in a principled way neural network-based learning with symbolic knowledge representation and logical reasoning. The insights provided by 20 years of neural-symbolic computing are shown to shed new light onto the increasingly prominent role of trust, safety, interpretability and accountability of AI. We also identify promising directions and challenges for the next decade of AI research from the perspective of neural-symbolic systems.},
urldate = {2024-06-17},
publisher = {arXiv},
author = {Garcez, Artur d'Avila and Lamb, Luis C.},
month = dec,
year = {2020},
note = {arXiv:2012.05876 [cs]},
keywords = {Computer Science - Artificial Intelligence, Computer Science - Machine Learning, I.2.4, I.2.6},
annote = {Comment: 37 pages},
file = {arXiv Fulltext PDF:/home/rp152k/Zotero/storage/62RGF8PB/Garcez and Lamb - 2020 - Neurosymbolic AI The 3rd Wave.pdf:application/pdf;arXiv.org Snapshot:/home/rp152k/Zotero/storage/3QTEEPKH/2012.html:text/html},
}

@article{hitzler_neural-symbolic_2020,
title = {Neural-symbolic integration and the {Semantic} {Web}},
volume = {11},
issn = {22104968, 15700844},
url = {https://www.medra.org/servlet/aliasResolver?alias=iospress&doi=10.3233/SW-190368},
doi = {10.3233/SW-190368},
language = {en},
number = {1},
urldate = {2024-06-17},
journal = {Semantic Web},
author = {Hitzler, Pascal and Bianchi, Federico and Ebrahimi, Monireh and Sarker, Md Kamruzzaman},
editor = {Janowicz, Krzysztof},
month = jan,
year = {2020},
pages = {3--11},
file = {Hitzler et al. - 2020 - Neural-symbolic integration and the Semantic Web.pdf:/home/rp152k/Zotero/storage/TXQTYC6N/Hitzler et al. - 2020 - Neural-symbolic integration and the Semantic Web.pdf:application/pdf},
}

@misc{bottou_machine_2011,
title = {From {Machine} {Learning} to {Machine} {Reasoning}},
url = {http://arxiv.org/abs/1102.1808},
doi = {10.48550/arXiv.1102.1808},
abstract = {A plausible definition of "reasoning" could be "algebraically manipulating previously acquired knowledge in order to answer a new question". This definition covers first-order logical inference or probabilistic inference. It also includes much simpler manipulations commonly used to build large learning systems. For instance, we can build an optical character recognition system by first training a character segmenter, an isolated character recognizer, and a language model, using appropriate labeled training sets. Adequately concatenating these modules and fine tuning the resulting system can be viewed as an algebraic operation in a space of models. The resulting model answers a new question, that is, converting the image of a text page into a computer readable text. This observation suggests a conceptual continuity between algebraically rich inference systems, such as logical or probabilistic inference, and simple manipulations, such as the mere concatenation of trainable learning systems. Therefore, instead of trying to bridge the gap between machine learning systems and sophisticated "all-purpose" inference mechanisms, we can instead algebraically enrich the set of manipulations applicable to training systems, and build reasoning capabilities from the ground up.},
urldate = {2024-06-17},
publisher = {arXiv},
author = {Bottou, Leon},
month = feb,
year = {2011},
note = {arXiv:1102.1808 [cs]},
keywords = {Computer Science - Artificial Intelligence, Computer Science - Machine Learning},
annote = {Comment: 15 pages - fix broken pagination in v2},
file = {arXiv Fulltext PDF:/home/rp152k/Zotero/storage/MJ6VVSW2/Bottou - 2011 - From Machine Learning to Machine Reasoning.pdf:application/pdf;arXiv.org Snapshot:/home/rp152k/Zotero/storage/GYZQ2VD2/1102.html:text/html},
}

@misc{de_raedt_statistical_2020,
title = {From {Statistical} {Relational} to {Neuro}-{Symbolic} {Artificial} {Intelligence}},
url = {http://arxiv.org/abs/2003.08316},
doi = {10.48550/arXiv.2003.08316},
abstract = {Neuro-symbolic and statistical relational artificial intelligence both integrate frameworks for learning with logical reasoning. This survey identifies several parallels across seven different dimensions between these two fields. These cannot only be used to characterize and position neuro-symbolic artificial intelligence approaches but also to identify a number of directions for further research.},
urldate = {2024-06-17},
publisher = {arXiv},
author = {De Raedt, Luc and Dumančić, Sebastijan and Manhaeve, Robin and Marra, Giuseppe},
month = mar,
year = {2020},
note = {arXiv:2003.08316 [cs]},
keywords = {Computer Science - Artificial Intelligence},
file = {arXiv Fulltext PDF:/home/rp152k/Zotero/storage/U4FGFF77/De Raedt et al. - 2020 - From Statistical Relational to Neuro-Symbolic Arti.pdf:application/pdf;arXiv.org Snapshot:/home/rp152k/Zotero/storage/8IVPLMHU/2003.html:text/html},
}

@misc{noauthor_neuro-symbolic_nodate,
title = {Neuro-{Symbolic} {Artificial} {Intelligence} - workshops},
url = {https://people.cs.ksu.edu/~hitzler/nesy/},
urldate = {2024-06-17},
file = {Neuro-Symbolic Artificial Intelligence:/home/rp152k/Zotero/storage/63DNPJFW/nesy.html:text/html},
}

@misc{lamb_graph_2021,
title = {Graph {Neural} {Networks} {Meet} {Neural}-{Symbolic} {Computing}: {A} {Survey} and {Perspective}},
shorttitle = {Graph {Neural} {Networks} {Meet} {Neural}-{Symbolic} {Computing}},
url = {http://arxiv.org/abs/2003.00330},
doi = {10.48550/arXiv.2003.00330},
abstract = {Neural-symbolic computing has now become the subject of interest of both academic and industry research laboratories. Graph Neural Networks (GNN) have been widely used in relational and symbolic domains, with widespread application of GNNs in combinatorial optimization, constraint satisfaction, relational reasoning and other scientific domains. The need for improved explainability, interpretability and trust of AI systems in general demands principled methodologies, as suggested by neural-symbolic computing. In this paper, we review the state-of-the-art on the use of GNNs as a model of neural-symbolic computing. This includes the application of GNNs in several domains as well as its relationship to current developments in neural-symbolic computing.},
urldate = {2024-06-17},
publisher = {arXiv},
author = {Lamb, Luis C. and Garcez, Artur and Gori, Marco and Prates, Marcelo and Avelar, Pedro and Vardi, Moshe},
month = jun,
year = {2021},
note = {arXiv:2003.00330 [cs]},
keywords = {Computer Science - Artificial Intelligence, Computer Science - Machine Learning, Computer Science - Computation and Language, Computer Science - Logic in Computer Science},
annote = {Comment: Updated version, draft of accepted IJCAI2020 Survey Paper},
file = {arXiv Fulltext PDF:/home/rp152k/Zotero/storage/2SP4FU4E/Lamb et al. - 2021 - Graph Neural Networks Meet Neural-Symbolic Computi.pdf:application/pdf;arXiv.org Snapshot:/home/rp152k/Zotero/storage/URE2YKP9/2003.html:text/html},
}

@misc{battaglia_relational_2018,
title = {Relational inductive biases, deep learning, and graph networks},
url = {http://arxiv.org/abs/1806.01261},
doi = {10.48550/arXiv.1806.01261},
abstract = {Artificial intelligence (AI) has undergone a renaissance recently, making major progress in key domains such as vision, language, control, and decision-making. This has been due, in part, to cheap data and cheap compute resources, which have fit the natural strengths of deep learning. However, many defining characteristics of human intelligence, which developed under much different pressures, remain out of reach for current approaches. In particular, generalizing beyond one's experiences--a hallmark of human intelligence from infancy--remains a formidable challenge for modern AI. The following is part position paper, part review, and part unification. We argue that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective. Just as biology uses nature and nurture cooperatively, we reject the false choice between "hand-engineering" and "end-to-end" learning, and instead advocate for an approach which benefits from their complementary strengths. We explore how using relational inductive biases within deep learning architectures can facilitate learning about entities, relations, and rules for composing them. We present a new building block for the AI toolkit with a strong relational inductive bias--the graph network--which generalizes and extends various approaches for neural networks that operate on graphs, and provides a straightforward interface for manipulating structured knowledge and producing structured behaviors. We discuss how graph networks can support relational reasoning and combinatorial generalization, laying the foundation for more sophisticated, interpretable, and flexible patterns of reasoning. As a companion to this paper, we have released an open-source software library for building graph networks, with demonstrations of how to use them in practice.},
urldate = {2024-06-17},
publisher = {arXiv},
author = {Battaglia, Peter W. and Hamrick, Jessica B. and Bapst, Victor and Sanchez-Gonzalez, Alvaro and Zambaldi, Vinicius and Malinowski, Mateusz and Tacchetti, Andrea and Raposo, David and Santoro, Adam and Faulkner, Ryan and Gulcehre, Caglar and Song, Francis and Ballard, Andrew and Gilmer, Justin and Dahl, George and Vaswani, Ashish and Allen, Kelsey and Nash, Charles and Langston, Victoria and Dyer, Chris and Heess, Nicolas and Wierstra, Daan and Kohli, Pushmeet and Botvinick, Matt and Vinyals, Oriol and Li, Yujia and Pascanu, Razvan},
month = oct,
year = {2018},
note = {arXiv:1806.01261 [cs, stat]},
keywords = {Computer Science - Artificial Intelligence, Computer Science - Machine Learning, Statistics - Machine Learning},
file = {arXiv Fulltext PDF:/home/rp152k/Zotero/storage/S6TRQ9KR/Battaglia et al. - 2018 - Relational inductive biases, deep learning, and gr.pdf:application/pdf;arXiv.org Snapshot:/home/rp152k/Zotero/storage/JRHFDMQ4/1806.html:text/html},
}

@misc{yi_neural-symbolic_2019,
title = {Neural-{Symbolic} {VQA}: {Disentangling} {Reasoning} from {Vision} and {Language} {Understanding}},
shorttitle = {Neural-{Symbolic} {VQA}},
url = {http://arxiv.org/abs/1810.02338},
doi = {10.48550/arXiv.1810.02338},
abstract = {We marry two powerful ideas: deep representation learning for visual recognition and language understanding, and symbolic program execution for reasoning. Our neural-symbolic visual question answering (NS-VQA) system first recovers a structural scene representation from the image and a program trace from the question. It then executes the program on the scene representation to obtain an answer. Incorporating symbolic structure as prior knowledge offers three unique advantages. First, executing programs on a symbolic space is more robust to long program traces; our model can solve complex reasoning tasks better, achieving an accuracy of 99.8\% on the CLEVR dataset. Second, the model is more data- and memory-efficient: it performs well after learning on a small number of training data; it can also encode an image into a compact representation, requiring less storage than existing methods for offline question answering. Third, symbolic program execution offers full transparency to the reasoning process; we are thus able to interpret and diagnose each execution step.},
urldate = {2024-06-17},
publisher = {arXiv},
author = {Yi, Kexin and Wu, Jiajun and Gan, Chuang and Torralba, Antonio and Kohli, Pushmeet and Tenenbaum, Joshua B.},
month = jan,
year = {2019},
note = {arXiv:1810.02338 [cs]},
keywords = {Computer Science - Artificial Intelligence, Computer Science - Machine Learning, Computer Science - Computation and Language, Computer Science - Computer Vision and Pattern Recognition},
annote = {Comment: NeurIPS 2018 (spotlight). The first two authors contributed equally to this work. Project page: http://nsvqa.csail.mit.edu},
file = {arXiv Fulltext PDF:/home/rp152k/Zotero/storage/WQ8SJ528/Yi et al. - 2019 - Neural-Symbolic VQA Disentangling Reasoning from .pdf:application/pdf;arXiv.org Snapshot:/home/rp152k/Zotero/storage/9CIDLP4A/1810.html:text/html},
}

@misc{vaswani_attention_2023,
title = {Attention {Is} {All} {You} {Need}},
url = {http://arxiv.org/abs/1706.03762},
doi = {10.48550/arXiv.1706.03762},
abstract = {The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.},
urldate = {2024-06-19},
publisher = {arXiv},
author = {Vaswani, Ashish and Shazeer, Noam and Parmar, Niki and Uszkoreit, Jakob and Jones, Llion and Gomez, Aidan N. and Kaiser, Lukasz and Polosukhin, Illia},
month = aug,
year = {2023},
note = {arXiv:1706.03762 [cs]},
keywords = {Computer Science - Computation and Language, Computer Science - Machine Learning},
annote = {Comment: 15 pages, 5 figures},
file = {arXiv Fulltext PDF:/home/rp152k/Zotero/storage/335MFALG/Vaswani et al. - 2023 - Attention Is All You Need.pdf:application/pdf;arXiv.org Snapshot:/home/rp152k/Zotero/storage/KR7F69YM/1706.html:text/html},
}
2 changes: 0 additions & 2 deletions Content/index.org
@@ -65,8 +65,6 @@ The author intends to utilize this document as a personal knowledge base, emphas
- this creates a roam node that can then be referenced normally
- emacs isn't to be used to manipulate the references file
- always only export from zotero


** 0x2267
- setup an AI usage disclaimer
** 0x2262
